
Why we need new regulations to protect us from Facebook and Equifax

by Vivek Wadhwa on 10-06-2017 at 12:00 pm

The theft of an estimated 143 million Americans’ personal details in a data breach of consumer-credit reporting agency Equifax and the Russian hack of the U.S. elections through Facebook had one thing in common: they were facilitated by the absence of legal protection for personal data. Though the U.S. Constitution provides Americans with privacy rights and freedoms, it doesn’t protect them from modern-day scavengers who obtain information about them and use it against them. Our privacy laws were designed during the days of the telegraph and are badly in need of modernization. Much damage has already been done to our finances, privacy, and democracy—but worse lies ahead.

Credit bureaus have long been gathering information about our earnings, spending habits, and loan-repayment histories in order to determine our credit-worthiness. Tech companies have taken this one step further, monitoring our web-surfing habits, emails, and phone calls. Via social media, we have volunteered information on our friends and our likes and dislikes, and shared family photographs. Our smartphones know everywhere we go and can keep track of our health and emotions. Smart TVs, internet-enabled toys, and voice-controlled bots are monitoring what we do in our homes—and often are recording it.

In the land-grab for data, there were no clear regulations about who owned what, so tech companies staked claims to everything. Facebook required its users to grant it “a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content” they posted to the site. It effectively required them to give it the right to use their family photos and videos for marketing purposes and to resell them to anybody. American laws are so inadequate that such companies are not even required to tell consumers what information they are gathering and how they will use it.

Unlike manufacturers liable for the safety of their products, tech companies gathering our data have practically no liability for compromising it; they can protect it as they choose and sell it to whomever they want to—regardless of how the third party will use it. No wonder Equifax had such lax security or that Russians and hate groups were able to target the susceptible with misinformation on Facebook.

The problem of data brokers' not being required to provide industrial-strength security could possibly be fixed by the FTC. University of California at Berkeley law professor Pamela Samuelson says that since the FTC has "statutory authority to regulate unfair and deceptive practices," it "can act on that authority by initiating claims against those who fail to maintain adequate security." She notes that the FTC has used these powers before, by nudging firms to have privacy and security policies. And when firms failed to comply with their own policies, the FTC treated that as an unfair and deceptive practice.

This would level the playing field by making data brokers as responsible for their actions as most product manufacturers are for theirs. We hold our car manufacturers responsible for the safety of their products; why shouldn’t the tech companies bear similar responsibility?

New legislation could be enacted too, but Samuelson says that the data holders would fight it even harder. And though it would be a good step forward, it would only solve yesterday's problems.

Its falling costs will soon make DNA sequencing as common as blood tests, and the tech companies that today ask us to upload our photos will tomorrow ask us to upload our genomic information. Technology will also be able to understand our mental state and emotions. These data will encompass everything that differentiates us as human beings, including our genetics and psychology. While credit reports could result in the withholding of loans, corporate use of our genetic data could affect our jobs and livelihoods. We could be singled out for having genetic predispositions to crime or disease and find ourselves discriminated against in new ways.

The Genetic Information Nondiscrimination Act of 2008 prohibits the use of genetic information in health insurance and employment. But it provides no protection from discrimination in such matters as long-term care, disability, housing, and life insurance, and it places few limits on commercial use. There are no laws to stop companies from using aggregated genomic data in the same way that lending companies and employers use social-media data, or to prevent marketers from targeting ads at people with genetic defects.

Some states have begun passing laws to say that your DNA data is your property; but we need federal laws that stipulate that we own all of our own data, even if it takes an amendment to the Constitution. The right to decide what information we want to share and the right to know how it is being used are fundamental human rights in this era of advancing technologies.

Harvard Law School professor Lawrence Lessig has argued that privacy should be protected via property rights rather than via liability rules—which don’t prevent somebody from taking your data without your consent, with payment later. A property regime would keep data control with the person holding the property right. “When you have a property right, before someone takes your property they must negotiate with you about how much it is worth”, argues Lessig. Imagine a website that allowed you to manage all of your data, including those generated by the devices in your house, and to charge interested companies license fees for its use. That is what would become possible.

Daniel J. Solove, Professor of Law at George Washington University Law School, has reservations about protecting privacy as a form of property right, because the “market approach has difficulty assigning the proper value to personal information”. He worries that although to an individual giving out bits of information in different contexts, each transfer may appear innocuous, the information could be aggregated and become invasive when combined with other information. “It is the totality of information about a person and how it is used that poses the greatest threat to privacy”, he says.

It isn’t going to be easy to develop the new systems for maintaining control of personal information, but it is imperative that we start discussing solutions. As Thomas Jefferson said in 1816: “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinions change with the change of circumstances, institutions must advance also, and keep pace with the times.”

For more, please read my book, The Driver in the Driverless Car. It explains the choices we must make to create an amazing future.


Driver Distraction? Don’t Look at Me!

by Roger C. Lanctot on 10-06-2017 at 7:00 am

The American Automobile Association's (AAA) ongoing battle with driver distraction, among other issues such as drowsy driving and teen driving, turns a new page with a report flagging 23 of 30 tested in-vehicle infotainment systems as demanding a high or very high level of driver attention when performing tasks with the vehicle in motion. The report arrives as car makers have almost completely abdicated responsibility for in-dash interfaces with the acceptance and adoption of Apple's CarPlay and Alphabet's Android Auto smartphone interfaces.

“New Cars Have More Distracting Technology on Board Than Ever Before” – Washingtonpost.com

A growing proportion of automotive infotainment systems come outfitted with one or both of the Silicon Valley-sourced solutions. Every head unit with Android Auto or Apple CarPlay has had to go through a certification process from one or both of those companies before coming to market. This has made Apple and Alphabet either driver distraction gatekeepers or enablers.

In fact, the Android operating system is steadily insinuating itself into these very same systems and will likely begin arriving in production vehicles as a native OS late in 2018. This means the dashboard screens in cars and the user interfaces on those screens will increasingly be dictated by Alphabet and Apple.

This has to be maddening to AAA, which has long stressed the cognitive distraction of smartphone use in cars regardless of whether phones are used hands-free. AAA has never been able to convince regulators to completely forbid smartphone use of any kind in moving vehicles, and the Apple CarPlay and Android Auto smartphone integration systems were seen as one way to mitigate distraction.

Now, it seems, these systems have actually opened up a Pandora’s box of tempting app-based distractions that may be undoing the intended prophylactic. The National Highway Traffic Safety Administration (NHTSA) estimates that 3,477 people were killed in crashes attributed to driver distraction in 2015 and that the toll is rising.

AAA claims that the current systems violate NHTSA distraction guidelines, which are voluntary, not compulsory. For car makers, though, the rising influence of Apple and Alphabet means an increasing abdication of responsibility for distraction. The temptation for car makers is to blame Apple and Alphabet, even though car makers are responsible for the interfaces that allow users to switch back to OEM-supplied systems or the car radio.

The AAA report arrives as car makers find themselves on a slippery artificial intelligence slope. While some car makers are working on personalized AI systems built into their cars (like the IBM Watson-infused OnStar Go from GM), Apple and Alphabet are poised to step up their in-dash game with artificial intelligence systems of their own, designed to further mitigate distraction by emphasizing voice commands over touch-screen interfaces.

This battle will ultimately unfold over access to vehicle sensor data, something that automakers have strenuously sought to wall off from Apple's and Alphabet's smartphone platforms. But if pressure grows from regulators, car makers may be forced into the arms of Apple and Alphabet, who may have greater resources to bring to the challenge of mitigating distraction based on the complete sensor-infused driving context.

Alphabet certainly has an edge here as Android begins penetrating dashboards as a native operating system. AAA would likely not approve, given its opposition to any kind of smartphone use in vehicles. The only alternative may be to turn the entire car into a smartphone on wheels, which, come to think of it, is more or less what is happening. Or maybe revert to the regular old car radio. Yeah, right.


Presto Engineering – Outsourced Secure Provisioning – Even Their Secrets Have Secrets

by Mitch Heins on 10-05-2017 at 12:00 pm

When I first heard about Presto Engineering I was intrigued by a statement on their web site claiming that one of their secured solutions included, "The ability to incorporate your secrets without knowing them". If Mr. Spock had been in the room, his eyebrow would certainly have been raised. Indeed, what does that statement mean?

It turns out that in the world of the Internet of Things (IoT), almost every device has a communications link associated with it and that link is vulnerable to attack. As a result, companies building IoT systems are working feverishly to incorporate security into their devices. While security can be “programmed” into your software, almost everyone is now using hardware features to make their IoT systems more secure. And, though there are several different types of hardware security measures that can be employed, almost all of them require some type of “provisioning”. Presto Engineering is one of the companies that really knows how to do this step well.

So what is the provisioning thing and why is it so important? This brings us full circle to the statement that raised our proverbial eyebrow. Provisioning is the process whereby the secrets necessary for security functions are incorporated into individual IoT devices. The trick here is that the chain of secrecy for this data must be such that even the people doing the provisioning of an IoT chip can’t know the secrets. Yep, you heard that right. The last thing you want to do is go to the trouble of building a highly secured IoT chip only to have your secure UIDs, transport keys, authorization certificates etc. get intercepted and compromised before they ever get loaded into the chip. So, the secrets being loaded have to be secret to everyone including the company doing the actual physical provisioning.
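To make the "secrets without knowing them" idea concrete, here is a minimal Python sketch of a common pattern: the customer wraps (encrypts) each device secret under a transport key that only the customer and the chip itself hold, so the provisioning floor handles nothing but opaque blobs. The toy stream cipher below stands in for the AES key wrapping a real HSM-backed flow would use; all names and key sizes are illustrative assumptions, not Presto's actual implementation.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 in counter mode). Illustration only;
    a real provisioning flow would use HSM-backed AES key wrapping."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    # XOR the data against the keystream (same call encrypts and decrypts)
    return bytes(x ^ y for x, y in zip(data, out))

# --- Customer side: secrets are generated here and never leave in the clear
transport_key = secrets.token_bytes(32)   # pre-shared with the chip's ROM/OTP
device_secret = secrets.token_bytes(16)   # e.g. a per-device key or UID seed
wrapped = keystream_xor(transport_key, device_secret)

# --- Provisioner side: the test floor sees and loads only `wrapped`;
# it cannot recover `device_secret` without `transport_key`, which it
# never holds.

# --- Device side: the transport key burned in at fab unwraps the secret
unwrapped = keystream_xor(transport_key, wrapped)
assert unwrapped == device_secret
```

The point of the sketch is the trust boundary: a compromise of the provisioner's servers exposes only wrapped blobs, which are useless without the transport key embedded in the silicon.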

Depending on the end IoT application, there will be different levels of secrecy and control required. In fact, the industry has even set up procedures to provision secure chips in a way that can be audited according to a set of international standards known as the Common Criteria for Information Technology Security Evaluation. These criteria specify what are known as Evaluation Assurance Levels (EALs), which range on a scale from EAL1 (minimal) to EAL7 (government and military high security). EAL5 is typical for highly demanding commercial applications such as banking, payment, pay TV, secure access control systems, etc. The important part is that the "provisioner" uses rigorously controlled and auditable processes to securely handle its customers' secrets while ensuring absolute integrity and confidentiality of this operation.

Security hardware can be provisioned at the wafer level during wafer probe; at the chip level, after packaging; or at the board level once the chips are placed on the board. Depending on where the provisioning is done, there will be processes that need to be in place to ensure that “secret data” can be securely transmitted to the provisioner. Additionally, the provisioner will also need to ensure that the data will remain secure until it can be physically encoded into the ICs. This implies secure connections and servers between the secret data supplier and the various manufacturing sites where the provisioning is done.

There are several strategies that can be employed to ensure data integrity for the EAL required. The higher the EAL, the costlier the strategies get. For a company like Presto Engineering, the trick is to be able to customize the offerings to provide enough security for the requirement while minimizing the costs to their customers.

Presto, in fact, does just that. They have a comprehensive and flexible IT system that allows them to connect customers’ secret data with dedicated data storage rooms while complying with different EAL requirements. As an example, if the EAL is relatively low, Presto may be allowed to use virtual machines on shared servers to keep different customers’ data separated. By sharing servers, they can keep costs down.

Alternatively, if the EAL requirement is high, the customer may demand their data be handled only on customized high-security servers (also known as hardware security modules or HSMs). Per the diagram, the HSMs may be owned by Presto or by the customer. In either case, the more dedicated and secure the setup, the higher the cost and the greater the lead time to deploy the provisioning systems.

In addition to data storage, Presto has secured test floors (EAL5+/6) and secured warehouses where provisioned parts are kept until they are shipped via secure methods to their customers' locations. Presto also has expert trained staff operating secured flows who can assist customers in preparing devices for standards certification such as secure element card EMVCo testing (Europay, MasterCard and Visa).

Many enterprise-level companies handle these provisioning tasks themselves; however, in the world of IoT, there are large numbers of small and medium-sized enterprises (SMEs) for which this would be a daunting task. Nearly all the provisioning steps take place outside the walls of the SMEs, putting their secret data at risk. SMEs really need to look to outsourced provisioners to manage their costs, schedules and security risks.

Ideally, an outsourced provisioner should offer certain key capabilities:
  • A standardized and certified (EAL5+) secure process.
  • The ability to provision a wide range of device types, form factors and security technologies.
  • Competitive pricing at low and medium volumes with the ability to scale to larger volumes.
  • The ability to configure the provisioning process and infrastructure to meet varying requirements.

    Presto Engineering offers all these capabilities and more.

    They fill a significant void by giving IoT SMEs a trusted partner who can literally, “incorporate their secrets without knowing them”. In the words of Spock, “Indeed” … to which we respond, “Presto Engineering, ahead, warp-factor 5”.

    See also:
    Secure Provisioning White Paper
    Presto Engineering Solutions Page

Presto Engineering, Inc. provides a world-class turnkey production solution for IoT, secure, and high-speed (5G) devices, helping chipmakers accelerate time-to-market and achieve high-volume manufacturing without having to invest in operations teams and capital equipment. The company offers a global, flexible, and dedicated framework, with headquarters in Silicon Valley and operations in Europe and Asia.

Presto has operations in 7 locations worldwide. All secure provisioning facilities are certified EAL5+ and are audited annually by major bank and payment organizations. Presto has more than 50 secure provisioning experts on its technical staff. The company ships more than 100 million units annually and has securely provisioned more than a billion products. Customers include: major access control, pay TV, telecom, banking, and networking companies.


An Informal Update

    by Bernard Murphy on 10-05-2017 at 7:00 am

    I mentioned back in June that Synopsys had launched a blog on formal verification, intended to demystify the field and provide help in understanding key concepts. It’s been a few months, time to check in on some of their more recent posts.


    First up, it feels like they are finding their groove. Relaxed style, useful topics but now with a little more polish. Not marketing polish (heaven forbid), just good, sometimes even witty writing style. That makes these blogs (running now at about 2 per month) fun to read, which in turn should make them all the more effective in helping us become comfortable and more knowledgeable about the domain. Following is a quick (and incomplete) summary of a few that caught my eye.

Iain Singleton (a fellow Brit) blogged on abstractions and why we should feel perfectly comfortable with this idea. In formal it is often necessary to replace a complex block of logic with a simplified FSM, modeling just the interesting corner behaviors of that block for the purposes of the current verification objective. Iain draws a parallel with his commutes (in England) from Lancashire to the South East. He relates something every commuter will instantly understand: sometimes you look up and don't know where you are or how to get where you are going. Yet you continue on auto-pilot without missing a beat. You can do that because you have subconsciously abstracted the route. Your brain doesn't worry about intermediate details; it just remembers the principal milestones. Which is exactly what you are doing when you build an abstraction for a formal proof.

Abhishek Muchandikar wrote a blog on how to develop confidence that you can sign off in the presence of inconclusive proofs, an important question many ask about formal. Abhishek uses another analogy I like: the (ancient) Greek phalanx formation (don't you love it when engineers undermine the nerd myth by both knowing this stuff and using it in their writing). The phalanx formation was designed to be close to impenetrable and unstoppable, but naturally was no stronger than its weakest soldier. A similar concept applies to formal signoff in the presence of inconclusives (bounded proofs). For these cases you can run a bounded coverage analysis to the same depth to understand what parts of the code were not reached in this bounded proof, and what constraints (if any) may have limited reachability. All bounded proofs will have a weakest soldier, but adequate analysis of that weakness can lead you to confidently assert that the weakness is acceptable.

    Anders Nordstrom posted a blog on a couple of exotic usages for formal, taken from this year’s DAC. One was on hardware/software co-verification using formal methods. I was able to find an open link to the first paper he names but not the second. The first method uses specialized formal tools which can do formal proving in both RTL and C, so is perhaps a little out of reach of most of us. However, I could imagine you might in some cases model the software through an (RTL FSM) abstraction which could then be included in the hardware proof. The second exotic application Anders mentions is formal verification of mixed signal designs. There’s in fact a fairly rich list of papers around this domain. TI presented a paper at DAC on using abstract models for interface blocks in applying formal methods to verify interface behavior.

    I’ll wrap up with one more blog, from Sean Safarpour, on the organic growth of formal verification. This is interesting in part to understand where formal adoption is growing but in some ways even more to understand how that is happening. Sean said that in previous trips to Asia he found that he was pushing formal, interest was high but progress between visits was limited. The climate was completely different on his most recent trip; he didn’t have to pitch, customers were now pitching him on their successes in using formal and how they now need help to further expand usage.

Sean makes some interesting points on how this happened. The myth that formal requires deep formal expertise still persists, yet Sean says that there are probably only a dozen or so such people in Asia. Hiring more expertise is therefore not a practical solution to expanding usage. Expertise instead has to be grown organically, yet companies can't afford to wait years for engineers to become deep experts. So they are seeding interest in verification groups (through conference attendance, for example), encouraging a champion, growing a pilot group around that champion, starting with the simpler formal applications (apps) and socializing successes within engineering and also up the management chain. I suspect many other successful adoptions started in the same way. Good input for those of you still on the fence.

    You can access the full set of blogs HERE.


    Is there anything in VLSI layout other than pushing polygons? (2)

    by Dan Clein on 10-04-2017 at 12:00 pm

One of the important changes that happened between 1984 and 1988 was the development of the hardware platforms. Calma evolved from the mainframe S140, with 2 combined monitors per terminal, to the S280, with 2 individual monitors per terminal. This meant that we moved from noisy, darker rooms to quieter, better-lit rooms. We doubled the speed and the memory in our disks. We had 2 whopping disks of 512MB each, and each cost 100K US dollars. We also had 2 powerful color plotters from Versatec. Part of the layout DRC was done by hand using plots at x1000 or x10000 scale and plastic rulers. These plotters needed a climate-controlled room at 20-21 Celsius and 40-65% humidity. A nice and refreshing room to chill in when coming from a tropical day outside in the Tel Aviv area. The room was also used to host the Calma mainframe and the console which we used for backups.

Even though Calma improved in speed, memory and terminals, there was no network connection between the 2 computers. You had to use tapes (!) to transfer data from the S140 to the S280. Later Calma tried individual workstations, but unfortunately there was again no network between them; you needed to write "cassette tapes" to move data between the workstations and the mainframe. So being a layout designer meant you needed to know how to write & read tapes, prepare daily and weekly backups, prepare data for the IBM for verifications, align plotter paper and change ink, do maintenance on the computer and plotters, etc. Suddenly the opportunity to expand your knowledge was there. The only thing you needed to do was volunteer.

At the same time Daisy decided to bring to market a layout tool to conquer the layout market, as they had conquered the circuit world. As an experienced layout designer, I had the option to try it, so I volunteered. I went for a week to Daisy Corporation in Israel and got trained in ChipMaster 3.0, the crown jewel of Daisy at that time. Well, after one week of training and a few weeks of testing, the prognosis was bleak: they were missing important parts of the flow needed to use this software as an augmentation to Calma, or as a replacement. The biggest flaw was that it could not generate or read standard GDSII. For anybody wanting to use this software in production, this was a showstopper. So I decided that we were not going to use it.

Guess what, I was right; Daisy had a short life afterwards. But again, it was time to look for something better than Calma, and the market had enough players to do just that. Like Motorola Israel, the Austin team was looking at options, and 2 showed up around 1986: CAECO, software from Silicon Compilers (a future part of Mentor Graphics), and SDA, part of the future Cadence. After a few demos and benchmarks, Motorola went for CAECO, as the software had both circuit and layout design tools; still not linked in a network, but at least the design constraints were the same. We still used printed paper for schematics (mostly for approval controls), but in the following years we were able to open schematics and layout on the same screen. How is that for a technological revolution? What about moving from a pen to a mouse with 2 buttons? Now we had select and functions in one device: 100% time savings. This was a big jump in productivity, and Motorola migrated the MASKAP verification software from the IBM to UNIX, so we were now capable of running verification locally on each machine! Talk about a productivity boost! Not only did we have unlimited licenses, but with local CAD we developed additional layout-design-driven DRC verifications.

As we started standard cells, datapath cells, and I/O cells, we invented specific design rules for each type of design. On an interesting note, the first SUN machines we used were powered by 68000 processors previously designed in Austin. We progressed as the processors progressed. I am a very proud layout designer, as I knew the layout project leader for the 68030, Beverly Vann, and worked later with the 68040 leader, Geno Browning.

As we started to build new structures, we needed to build new tools. The CAD department grew to 10 Masters and PhDs in software in no time, and the ideas started pouring in. We had to build memories (in a multi-usage process, a kind of SRAM) and we needed a way to automatically code the YES (1) and NO (0) into them. Our chips had firmware that had to be loaded (coded) with contacts and diffusion. From the day the base layers were ready, almost every week we had a new netlist (coding). So in the last week before tapeout, the firmware team would provide the final code (netlist) and we would run the generator scripts and final verifications. Guess what, I volunteered there also and learnt a few more things…
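As a rough illustration of what such a coding generator does (the grid dimensions here are assumed for the example, not Motorola's actual numbers), the firmware netlist reduces to a bit matrix, and the script drops a contact at every "1" location on the bit-cell grid:

```python
# Assumed bit-cell dimensions in microns, for illustration only.
BITCELL_W = 3.0
BITCELL_H = 4.0

def code_rom(bits):
    """bits: list of rows, each a string of '0'/'1' from the firmware netlist.
    Returns (x, y) origins of the contact shapes to drop into the layout."""
    contacts = []
    for row, word in enumerate(bits):
        for col, bit in enumerate(word):
            if bit == "1":
                # A '1' is coded by placing a contact in that bit cell.
                contacts.append((col * BITCELL_W, row * BITCELL_H))
    return contacts

print(code_rom(["101", "010"]))
# -> [(0.0, 0.0), (6.0, 0.0), (3.0, 4.0)]
```

Rerunning this on each weekly netlist regenerates only the coding layers, which is what made a code drop in the final week before tapeout feasible.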

The next big thing I was involved in was the testing of a structure called a programmable logic array (PLA). This is a kind of programmable logic block used to implement combinational logic circuits. Esher Haritan was the CAD guy who had to implement the new "tool", and I volunteered to generate for him all the combinational layouts to test this new "beast". We had a lot of fun, and from that date we are still friends, 30 years later… I guess this was a great experience, as I started to work with CAD on roadmaps for future tools.

But process technology was moving forward, and we migrated to 2 layers of metal (!). Now we could have "metal directions". As we used metal 1 for local interconnect, we decided, like almost everybody in the world, to run metal 2 vertically to get outside of the cells into busses! Well, if you have 2 metals and you can plan to have some simple functions "ready" in layout, you can, as we and others did, invent your own standard cell (functional gates) library. At first a standard cell library was just a collection of simple gates: inverters, NANDs, NORs, flip-flops, buffers and spare cells. We built all these "gates" at a fixed height so they could later be used by all designers and the layout could be done much faster. And just like today, we built cell width as a multiple of the via-to-via spacing in metal 2 (the pitch).
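That pitch rule can be sketched in a few lines; the 4-micron metal-2 pitch below is an assumed number for illustration, not the actual process rule:

```python
import math

# Via-to-via spacing in metal 2, in microns (assumed value for the example).
M2_PITCH = 4.0

def snap_cell_width(raw_width_um: float, pitch: float = M2_PITCH) -> float:
    """Round a cell's drawn width up to the next multiple of the M2 pitch,
    so the vertical metal-2 routing tracks stay on-grid over every cell."""
    return math.ceil(raw_width_um / pitch) * pitch

print(snap_cell_width(13.2))  # -> 16.0
print(snap_cell_width(4.0))   # -> 4.0
```

Snapping every cell to the same grid is what lets a router drop vertical metal-2 tracks over any row of cells without per-cell adjustments.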

We did not have modeling or extraction of the cells we used, nor information about internal timing from input to output. But this was already a productivity improvement. I did cells for a few months, but blocks were schematic driven and the placement and routing were manual. Tired of manual work, I started to ask around if there were tools to place and route this automatically. I heard from others that somebody in the UK had invented a Place & Route tool. It was called TanCell and it came from a company called Tangent. We got in touch with them and asked what was needed for a benchmark: humans versus the new software. We invited their representative to come and work inside Motorola Israel, so we had a vendor AE onsite.

After a false start with a poor AE, we got Tommy Belpasso, who was at that time their best expert. As I was the owner of the block and the library, we spent 3 weeks together. The tool was crude, and to make it work you had to modify libraries specifically for the tool's limitations. No more free imagination; now you had to create cells with pins in the center, as the tool was the "grandfather" of channel-based Place & Route. We made it, but in this battle the effort was too great. What I learnt in those 3 weeks was that a good AE can make a poor tool work by finding workarounds and solving issues on the spot. I was lucky to meet a few more AEs like Tommy later in my career…

One interesting development from this experiment was that we developed standard cells with the vertical metal 2 tracks included from top to bottom, at via-to-via pitch, but hidden as a TEXT layer for M2. When we ran verifications, CAD wrote an on-the-fly script that would translate M2 text into M2 (temporarily, at GDSII creation) so we could verify at cell level that when the routing went over the cell, it would still be DRC clean. The lesson learnt, again, was that you can always take something new from another domain, in this case digital P&R, and apply it to improve your flow, methodology, etc. This is why volunteering to learn things outside my working duties was always appealing.
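A sketch of that on-the-fly translation, with made-up GDSII layer numbers (real layer assignments were process-specific):

```python
# Assumed layer numbers for illustration: 63 = M2 "text" marker, 49 = real M2.
M2_TEXT_LAYER = 63
M2_LAYER = 49

def promote_text_tracks(shapes):
    """Return a copy of a (layer, rectangle) shape list with the hidden
    M2-text track rectangles rewritten as real M2, mimicking the temporary
    translation done at GDSII creation before running DRC."""
    promoted = []
    for layer, rect in shapes:
        if layer == M2_TEXT_LAYER:
            layer = M2_LAYER  # promote the hidden track to real metal 2
        promoted.append((layer, rect))
    return promoted

# One real M2 shape plus one hidden track, as (x1, y1, x2, y2) rectangles.
cell = [(M2_LAYER, (0, 0, 1, 20)), (M2_TEXT_LAYER, (4, 0, 5, 20))]
print(promote_text_tracks(cell))
# -> [(49, (0, 0, 1, 20)), (49, (4, 0, 5, 20))]
```

Because the promotion is done only on the temporary GDSII used for verification, the library cells themselves stay clean while the DRC run sees the routing tracks as real metal.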

    More about how a layout designer can have “spice” in their profession next time.

    Dan Clein
    CMOS IC Layout Concepts, Methodologies and Tools

    Also read: Is there anything in VLSI layout other than “pushing polygons”? (3)


    Semiconductor IP on Fortune’s 2017 100 Fastest-Growing Companies List!

    by Daniel Nenni on 10-04-2017 at 7:00 am

The Semiconductor IP market has always been a big draw for SemiWiki readership and I expect that to continue. One of the more interesting companies we have covered over the past 6+ years is CEVA, which is now on Fortune's 2017 100 Fastest-Growing Companies List. In fact, CEVA is the ONLY semiconductor IP company on the list, joining semiconductor companies Silicon Motion, NVIDIA, Cirrus Logic, Microsemi, Skyworks, and IDT.

    Gideon Wertheizer, CEO of CEVA commented: “Fortune’s acknowledgment of CEVA as one of the fastest growing public companies over the past three years is a testament to our successful expansion strategy which has enabled us to become a technology leader. Our platform IPs for 5G, deep learning, computer vision, voice assistants, Bluetooth and Wi-Fi are critical building blocks for all smart and connected devices. We are very proud to feature on this list alongside some of the world’s most prominent companies.”

    We are at the intersection of three trends that are making Semiconductor IP even more interesting moving forward: the advent of systems companies making their own chips (systems companies now dominate the fabless semiconductor ecosystem, and I expect that trend to continue); artificial intelligence, which is a boon to Semiconductor IP (deep learning and computer vision, for example); and M&A, with the mega-acquisition of ARM by SoftBank last year followed by the Chinese acquisition of Imagination Technologies this year.

    CEVA stock, by the way, was around $15 when we started covering them and today it is over $40.

    For the record, CEVA is the leading licensor of signal processing IP used for image and computer vision, deep learning, audio, voice, speech, and sensor fusion. CEVA also provides wireless communication and connectivity IP. The target markets include mobile, wearable, automotive, industrial and consumer IoT, which just about covers every topic on SemiWiki.com.

    CEVA is also a very well run company with more than 300 employees in the US, Israel, Europe and Asia. To date more than 8 billion CEVA-powered chips have been shipped worldwide. Given the exploding silicon growth in IoT, Automotive, Robotics, Drones, Mobile, Wearables, and dozens of other vertical markets, we should hit a trillion CEVA-powered chips in the not too distant future.

    Bottom line: Semiconductor IP is a critical enabler of the fabless semiconductor ecosystem.

    About CEVA, Inc.
    CEVA is the leading licensor of signal processing IP for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices, advanced imaging, computer vision and deep learning for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi (802.11 a/b/g/n/ac up to 4×4) and serial storage (SATA and SAS). Visit us at
    www.ceva-dsp.com and follow us on Twitter, YouTube and LinkedIn.


    Photonics Summit Delivers High-Bandwidth Discussion on State of Silicon Photonics
    by Mitch Heins on 10-03-2017 at 12:00 pm

    On September 6, 2017, Cadence Design Systems, Lumerical Solutions and PhoeniX Software hosted their second Photonics Summit. As with last year’s summit, this was a two-day event, with the first day featuring a myriad of photonics presentations and the second day being a hands-on workshop. The workshop taught attendees how to use the Cadence, Lumerical and PhoeniX EPDA (electronic/photonic design automation) flow to put together a photonic system composed of a photonic integrated circuit (PIC), a CMOS ASIC and a laser light source, all within a system-in-package (SiP) configuration using a silicon-based interposer.

    While the hands-on day was very interesting I want to focus on the first day’s presentations. To paint with a broad brush, the overarching theme of the presentations seemed to be that there is an explosion in the breadth of photonics applications and the technology being applied across the entire photonic manufacturing ecosystem. Read on and you’ll see that integrated photonics is opening up applications that up till now have not been feasible with standard electronics.

    In the past, I’ve told people that working on photonics is sort of like going back in time to the early IC days of the 1980s. I now amend that premise: instead, it’s like watching history repeat itself, but on fast-forward. The progress shown and the number of people working on the technology challenges are simply astounding. The collection of presenters at this year’s summit reflected this, with presentations covering systems applications, wafer manufacturing and process design kits, packaging, automation for assembly and test, and the new design-characterization figures of merit being applied to keep up with the dramatically increasing bandwidths made available through integrated photonics.

    The first presentation was given by Andrew Wheeler of Hewlett Packard Enterprise (HPE) Labs, entitled ‘Photonics: the fabric of our (future) lives’. That’s a pretty bold statement when you think about it, but Andrew unfolded a scenario that we are watching come true even as I post this article. Per Andrew, the amount of data we are processing is growing dramatically.

    As a few examples, in 2016, Facebook members were posting an average of 4 Petabytes (PB) of data a day. That’s the equivalent of 4,000 Terabytes or 4,000,000 Gigabytes. In 2017, Walmart’s daily transaction database reached 40 PB and, by 2020, it is estimated that automobiles employing driver-assistance capabilities will generate around 40,000 PB daily! The advent of the internet-of-things (IoT) will exacerbate this further, as we will have around 8 billion people on the planet using roughly 20 billion mobile devices to generate over 100 billion social infrastructure interactions per day using over 1 trillion apps.

    HPE predicts that this data explosion will precipitate a massive change in the way we process data, moving from a processor-centric compute model to one that is a memory-driven compute model. Up till now, this has not been feasible due to bandwidth and latency limitations of electronics technology between processors and memory. Integrated photonics will change this as photonics interconnects eliminate the distance factor between processors and memory, facilitating entirely new compute topologies. It doesn’t stop there though. HPE also envisions photonic-based computing as opposed to simply using photonics as an interconnect fabric.

    Along a similar vein, Darius Bunandar of MIT took photonic computing to a new level with a discussion of how photonics enables quantum computing. Photonics enables the creation of single-photon sources and single-photon detectors, which are the basic building blocks of quantum computing. There is still much work to be done, but the introduction of integrated photonics has already spawned more than a couple of new startups in this space. Darius also gave another example of advanced photonic computing, showing the realization of photonic-based deep neural networks (DNNs). These DNNs are the starting point of some very exciting work in artificial intelligence, including image recognition and inferencing, running at the speed of light.

    While this all sounds a little like science fiction, the rest of the speakers filled in the gaps for how these PICs will be manufactured, packaged, assembled and tested. Michael Rakowski from imec in Belgium gave an update on their silicon photonic processes that can now readily enable 50G NRZ photonic modulation and detection, including how they are now tightly integrating CMOS ICs with PICs using 3D assembly techniques. These are the same techniques that were part of the Summit’s 2nd-day hands-on training session.

    Paul Fortier of IBM followed with an excellent overview of automated high-throughput integrated photonics assembly capabilities. This presentation was especially interesting, as one of the drawbacks of integrated photonics so far has been the cost of the PICs. Unlike electronic ICs, the packaging and assembly of photonics onto boards actually represents up to 80% of the overall cost of the PIC solution (just the inverse of electronics). Per Paul’s presentation, one of the key items now being developed is the ability to use existing electronic pick-and-place technology to assemble PICs on boards. The tricky part is that alignment must be precise for photonics to work correctly. IBM sees progress on three fronts: passive alignment of fiber arrays using v-groove technology, the use of self-aligning structures for polymer ribbon connectors, and the use of flip-chip technologies with solder-induced self-alignment. The common thread in all of these is the idea of automated (hands-off) alignment of off-chip connections to the PIC.

    Lastly, there were presentations from Dan Neugroschl of Chiral Photonics, who works on photonics packaging and test, and Pavel Zivny of Tektronix, who works on test and measurement equipment used for photonic testing. Not to beat a dead horse, but both Dan’s and Pavel’s presentations again showed how much infrastructure work has been going on to support integrated photonics. There are probably two articles’ worth of information to share from those presentations that will need to wait for another day.

    Suffice it to say, this summit turned out to be a very high-bandwidth presentation on the state of integrated silicon photonics. It was time well spent and I can’t wait to see what will transpire between now and the next summit.

    If you are interested in learning more about what all of these gentlemen presented, you can find their presentations at the link below.

    See Also
    Photonics Summit Proceedings
    Cadence, Lumerical, PhoeniX Photonic Offering


    Adoption, Architecture and Origami
    by Bernard Murphy on 10-03-2017 at 7:00 am

    Last week I sat in on Oski’s latest in a series of “Decoding Formal” sessions. Judging by my first experience, they plan and manage these events very well. Not too long (~3 hours of talks), good food (DishDash), good customer content, a good forward-looking topic and a very entertaining wrap-up talk.

    Continue reading “Adoption, Architecture and Origami”


    TSMC Teamwork Translates to Technical Triumph
    by Tom Simon on 10-02-2017 at 12:00 pm

    Most people think that designing successful high speed analog circuits requires a mixture of magic, skill and lots of hard work. While this might be true, in reality it also requires a large dose of collaboration among each of the members of the design, tool and fabrication panoply. This point was recently made abundantly clear at the TSMC Open Innovation Platform (OIP) Forum held in Santa Clara on September 13th. Indeed, the entire OIP ecosystem was established by TSMC to encourage this kind of collaboration. Over the years it has enabled significant advances in electronic product design and delivery.
    Continue reading “TSMC Teamwork Translates to Technical Triumph”


    eFabless and Silego $15,000 Go Configure Design Challenge Series!
    by Daniel Nenni on 10-02-2017 at 7:00 am

    The eFabless and Silego “Go Configure Design Challenge Series” is the first of its kind to allow a global community of designers to implement widely used functions using GreenPAK™ Configurable Mixed-signal ICs (“CMICs”) and their intuitive drag-and-drop software GUI. The efabless platform will serve as the crowdsourced design platform on which the CMIC hardware designs will be submitted. This design challenge is intended to be the first step in establishing a future marketplace for innovators and their designs.

    For more information here is a CEO interview with John Teegen of Silego Technology and Mike Wishart of efabless. John and Mike discuss community design of Silego Configurable ICs and the Go-Configure Design Challenge.

    Hi John and Mike. John, tell our readers a bit about Silego.
    JT: Dan, we may be one of the more impactful under-the-radar companies that you will ever see. In fact, Semico called Silego the “best kept secret in Silicon Valley”. We pioneered and are the market leader in Configurable Mixed-signal ICs, or CMICs, and we have shipped over 3 billion devices since their introduction. You can think of CMICs as bringing the convenience of FPGAs to mixed-signal. Each device contains analog components, discrete digital logic, and power components that can be integrated through software into highly configurable, small, easy-to-use, low-cost ICs. Customers get faster time to market, reduced system parts count, lower power consumption, less board space and reduced BOM costs.

    Six generations of CMICs have been introduced, with increasing functionality and design tool enhancements. The design process of a CMIC is now extremely intuitive and very comparable to designing circuits on PCBs. With minimal training, a wide variety of designers can now design their own CMIC with no NRE or production commitment.

    That is why we are excited about the Go Configure Design Challenge Series and partnership with efabless. This was the brainchild of Mike Noonen, Silego’s Vice President of Sales and Business Development, in collaboration with Mike Wishart and Mohamed Kassem, co-founder of efabless. The objective is to educate and enable an engaged community on the efabless platform that can respond to design requests from customers of all sizes and open the IoT market to the power of Silego mixed-signal-on-demand. We see it as a key step in growing our business and introducing a better way to design to thousands of designers worldwide.

    Mike, bring us current on efabless.
    MW: As you recall, efabless.com is the world’s first community engineering platform for electronics solutions. We connect a global community of mixed-signal designers with customers and enable them with processes and a unique community-centric marketplace to develop, share and commercialize products. We introduced our solution for community created IP with a design challenge for X-FAB, our foundry partner, last November. We released our community created IC platform in June. With Silego we now offer community development of programs, we call “soft designs”, for configurable IC parts. The Challenge Series is the first step in our support of Silego and configurable ICs. We will also provide the marketplace to connect designers with opportunities and to showcase their designs.

    How does the Silego partnership fit into your model?
    MW: We are very excited about our relationship with Silego. This is a strong validation of the principle of community design and the Go Configure Design Challenge is a first of its kind for the sector. Remember, we founded efabless on the principle that a connected and a collaborative community of highly skilled innovators is a catalyst for IoT and smart hardware to reach its fullest potential. IoT and smart hardware products are often created by companies that do not have the internal core expertise in electronics development or expertise in a very specific area of hardware design. An example would be a shoe company making a Bluetooth connected running sneaker. This new class of innovators needs a broad community with the time and resources to turn an idea into a product. And, in particular, they need analog and mixed-signal to connect the digital “smarts” of their products with the physical world. That’s where Silego-on-efabless comes in.

    JT: The Silego GreenPAKs are a terrific solution for community design. They are easy to learn and easy to use. With our devices, the efabless community can offer Configurable Mixed-signal IC solutions with incredibly fast time to market for a wide range of applications.

    MW: Silego also greatly expands the community of designers beyond the universe of analog and mixed-signal IC designers to PCB and other system level engineers.

    Tell us about the Go Configure Design Challenge
    JT: The Go Configure Challenge is obviously a play on words – effectively who would have thought that designers from around the world could learn our platform, create designs and get global recognition for doing so. Oh, yes, and win prizes. In the Challenge Series, we will be tasking the community with designing various industry standard functions and the designs will be judged by Silego on the quality of the design and the documentation. We have chosen 10 separate industry standard functions at three levels of complexity: easy, moderate and difficult.

    We will present the Go Configure Design Challenge Series on efabless in five pairs of two challenges each, beginning on October 2nd and continuing until mid-December. Each Challenge will be open for two weeks. We will offer prizes like smartwatches and Bluetooth speakers for the highest quality designs for each separate challenge. Each separate challenge will also pay out Time-To-Market cash awards for the first three entries that meet a high, commercially acceptable, standard of quality – we recognize that successful community design requires both speed and quality. Finally, we will keep a running tabulation of scores across all challenges and present a grand prize to the overall challenge winner.

    How do people register and compete?
    MW: It all works very easily. Designers come to efabless and check out the “Go Configure Design Challenge Series” page. This will provide access to all the details on the Challenge Series as well as links to information on each separate challenge in the series. To participate, the designer registers on efabless and then reviews and selects a challenge or challenges. We also provide access to training videos authored by Silego engineers. We think participants will be pleased to see how easy it is to learn the GreenPAK development environment and become effective.

    What happens after the Challenge? How can this new-found design talent be utilized?
    MW: We are very excited about making Silego GreenPAKs available to our community as a key capability on the efabless platform. Community members will be able to create personal profiles that include their GreenPAK accomplishments and are searchable by customers and other community members. Potential customers will be able to search for GreenPAK talent or request designs. Community members will also be able to create their own unique designs and present them in a very protected way, with application notes and data sheets, in our marketplace.

    We look forward to seeing community innovation on Silego GreenPAKs and encourage your readers to sign up and get started with the Go Configure Challenge.