
Enabling Complex System Design Environment
by Alex Tan on 08-15-2018 at 12:00 pm

Deterministic, yet versatile. Robust and integrated, yet user-friendly and easily customizable. Those are some of the desirable characteristics of an EDA solution as the boundaries of design optimization, verification and analysis keep shifting. The shift left is driven by time-to-market schedule compression, while process and application complexities keep pushing in the opposite direction.

Judging from the many technical sessions held at DAC, early verification has made progress in shifting left to keep pace with the implementation process, by integrating the application or end-product software into virtual prototyping for early system design exploration. Virtual prototyping allows designers not only to explore corner scenarios but also to reproduce experiments with various permutations of constraints or variants. The increasingly heterogeneous SoCs targeting emerging applications demand virtual prototyping that supports not only software and hardware but also the incorporation of digital, analog and interconnect IPs.

Magillem provides robust, XML-based front-end design solutions that enable and streamline design activities around its integrated environment. It has deployed its products across several industry boundaries, from SoC design houses and semiconductor manufacturers to legal and technical documentation publishers.

Since rolling out its Magillem Architectural Intent (MAI) for architectural intent capture as covered in my prior blog, the company has announced a joint effort with Imperas for an integrated virtual prototyping platform and also introduced Magillem Flow Architect (MFA), a turnkey solution that helps customers define their best recommended flow.

At DAC 2018, I had the opportunity to interview Magillem CEO, Isabel Geday, and Magillem VP of Strategic Accounts, Paschal Chauvet. The discussion centered around Magillem's continued efforts to accommodate IC design needs and how it adapts to current trends in the EDA ecosystem.

Some EDA players have announced their product collaborations. Does Magillem have similar efforts?
“I call it a partnership, by not creating duplicated solutions,” said Isabel diplomatically. She gave the example of Magillem’s earlier partnership with Imperas, using its verification IP models and debugging software. At DAC, Magillem announced another partnership with Arteris IP, an indisputable leader in NoC IP that provides SoC interconnect solutions.

The integration with Arteris IP was demonstrated by a full-compliance validation of the company’s interconnects with the Magillem environment. Using a single design environment, customers can now easily build an SoC using Arteris’ IP instances (FlexNoC and NCore) and the Magillem front-end design environment (MAI, MPA and MRV). “It is very good for customers to have one design environment. To be able to work with and plug all the IPs,” she added.


Since Magillem is based on the industry standard IP-XACT, it enables integration of other tools into its environment. “While the other big players have closed environments, for Magillem it is the DNA of our product to allow integration,” she pointed out. Furthermore, the unified environment also provides more efficient and automated sharing of information across the supply chain during product development.

There are increasing AI and ML related efforts. How do these impact your products?
“This is a very interesting question, as we work with methodology and flow aspects,” Isabel said. She gave two examples. The first is from a product application standpoint. A large customer has used the Magillem solution to build a kind of expert system, which interacts with engineers through questionnaires and, depending on the answers given, makes decisions one way or another.

The second example is related to a prior Magillem internal effort. “We had activities on the side, done several years ago, to demonstrate how versatile our platform is, by building something using the assembly of metadata of descriptions.” She elaborated that the team applied some AI techniques to analyze interpreted legal texts and the impact of changes made on the existing document corpus. The system measured the impact of new text fragments on the existing ones and suggested changes to the existing corpus when necessary. In addition, it was capable of learning a new syntax when it did not recognize a pattern. To sum up, she believes that AI is largely a replay of previously exercised concepts, but with more memory, compute power and algorithms involved.

Aside from these examples, the recently announced Arteris IP and Magillem integrated solution targets simplifying the increasingly complex SoC designs for AI and autonomous driving applications, which are now bound by the latency of on-chip interconnects rather than by the performance of on-chip processors and hardware accelerators.

Here at DAC, more EDA vendors are showcasing their products as accessible on the cloud. What is your take on cloud deployment?
“Our customers are Tier-1 companies and they have entrusted us with their most complex, expensive, demanding SoC platforms and designs,” said Isabel. She added that these customers’ policy is to keep confidentiality a top priority. She acknowledged that although some design data intelligence may benefit from cloud-based scenarios, the cloud is not an option yet. “People gain ownership of this internal solution. It is a less interesting idea to us than providing cognitive assistance to the experts. The customers are very good at what they do. They have to be the decision makers…to make fast decisions and be more productive as they deal with huge legacy and data.”

What is your data model? Do you allow customer flow customization?
“Our solution was directly derived from IP-XACT, which is universal inside our tool, allowing our customers to use one data model for the entire design flow,” said Paschal. Embedding external tools can be achieved through the Eclipse plug-in and TGI (Tight Generator Interface), the standard API for manipulating any IP-XACT database. According to Paschal, such flexibility is crucial for smaller companies, as they tend to highly customize their environments. The scalability of the compact data model is not an issue, as Magillem has worked with SoCs having millions of gates.

Traceability is the ability to track safety requirements from initial design inception through the implementation and operation phases. It is a key ingredient for compliance with functional safety standards such as IEC 61508 and ISO 26262. With the parsable IP-XACT based data, automated traceability throughout the development flow can be achieved.
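To make the idea of parsable IP-XACT data concrete, here is a minimal sketch, assuming an IEEE 1685-2014 component description and Python's standard XML library, of how a script might pull a component's bus interfaces into a simple traceability listing. It is not Magillem's TGI API, and the file name and report columns are purely illustrative.

```python
# Minimal sketch: extract the component name and bus interfaces from an
# IP-XACT (IEEE 1685-2014) component description, e.g. to seed a traceability
# report. The file name and report format are illustrative only.
import xml.etree.ElementTree as ET

NS = {"ipxact": "http://www.accellera.org/XMLSchema/IPXACT/1685-2014"}

def list_bus_interfaces(xml_path):
    """Return (component, interface, bus VLNV) rows from one IP-XACT file."""
    root = ET.parse(xml_path).getroot()
    component = root.findtext("ipxact:name", default="<unnamed>", namespaces=NS)
    rows = []
    for bif in root.findall(".//ipxact:busInterface", NS):
        if_name = bif.findtext("ipxact:name", default="?", namespaces=NS)
        bus_type = bif.find("ipxact:busType", NS)
        vlnv = "unknown"
        if bus_type is not None:
            vlnv = ":".join(bus_type.get(k, "?") for k in ("vendor", "library", "name", "version"))
        rows.append((component, if_name, vlnv))
    return rows

if __name__ == "__main__":
    # "my_ip_component.xml" is a placeholder path for any IP-XACT component file.
    for comp, iface, bus in list_bus_interfaces("my_ip_component.xml"):
        print(f"{comp:24s} {iface:16s} {bus}")
```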

Commenting on future work, Isabel stated that the current Magillem platform offering is unique. “Our earlier vision is now very appealing to new customers and new markets,” she added. Ongoing work includes the infrastructure to build a hub of links that will guarantee traceability in a very elegant way. She added that instead of building a hub of data, one could then add different standards and other sorts of data while preserving all the essential elements.

By providing a versatile framework that can be retargeted for complex system designs, the Magillem solution enables design teams to adapt to changing requirements from both design specifications and implementation methodologies.

For further info on Magillem products, please check HERE.


What Silicon Valley still gets wrong about innovation
by Vivek Wadhwa on 08-15-2018 at 7:00 am

Silicon Valley well exemplifies the saying “The more things change, the more they stay the same”. Very little has changed over the past decade, with the Valley still mired in myth and stale stereotype. Ask any older entrepreneurs or women who have tried to get financing; they will tell you of the walls they keep hitting. Speak to VCs, and you will realize that they still consider themselves kings and kingmakers.

With China’s innovation centers nipping at the Valley’s heels, and with the innovation centers that Steve Case calls “the rest” on the rise, it is time to dispel some of the myths by which it operates.

Myth #1. Only the young can innovate

The words of one Silicon Valley VC will stay with me always. He said: “People under 35 are the people who make change happen, and those over 45 basically die in terms of new ideas”. VCs are still looking for the next Mark Zuckerberg.

The bias persists despite clear evidence that the stereotype is wrong. My research in 2008 documented that the average and median age of successful technology company founders in the U.S. is 40, and several subsequent studies made the same findings. Twice as many of these founders are older than fifty as are younger than 25, and twice as many are over sixty as are under twenty. Older, experienced entrepreneurs have the greatest chances of success.

Don’t forget that Marc Benioff was 35 when he founded Salesforce.com; Reid Hoffman, 36 when he founded LinkedIn. Steve Jobs’s most significant innovations at Apple — the iMac, iTunes, iPod, iPhone, and iPad — came after he was 45. Qualcomm was founded by Irwin Jacobs, when he was 52, and by Andrew Viterbi, when he was 50. The greatest entrepreneur today, transforming industries including transportation, energy, and space, is Elon Musk; he is 47.

Myth #2. Entrepreneurs are born, not made

There is a perennial debate about who can be an entrepreneur. Jason Calacanis proudly proclaimed that successful entrepreneurs come from entrepreneurial families and start off running lemonade stands as kids. Fred Wilson blogged about being shocked when a professor told him that you could teach people to be entrepreneurs. “I’ve been working with entrepreneurs for almost 25 years now,” he wrote, “and it is ingrained in my mind that someone is either born an entrepreneur or is not.”

Yet my teams at Duke and Harvard had documented that the majority, 52 percent, of Silicon Valley entrepreneurs were the first in their immediate families to start a business. About 39 percent had an entrepreneurial father, and 7 percent had an entrepreneurial mother. (Some had both.) Only a quarter of the sample we surveyed had caught the entrepreneurial bug when in college. Half hadn’t thought about entrepreneurship even then, and they had had little interest in it when in school.

Useful specific instances are the backgrounds of Mark Zuckerberg, Steve Jobs, Bill Gates, Jeff Bezos, Larry Page, Sergey Brin, and Jan Koum. They didn’t come from entrepreneurial families. Their parents were dentists, academics, lawyers, factory workers, or priests.

Anyone can be an entrepreneur, especially in this era of exponentially advancing technologies — in which a knowledge of diverse technologies is the greatest asset.

Myth #3. Higher education provides no advantage

Peter Thiel made headlines in 2011 with his announcement that he would pay teenagers $100,000 to drop out of college. He made big claims about how these dropouts would solve the problems of the world. Yet his foundation failed in that mission and quietly refocused its efforts and objectives to provide education and networking. As Wired reported, “Most (Thiel Fellows) are now older than 20 and some have even graduated college. Instead of supplying bright young minds with the space and tools to think for themselves, as Thiel had originally envisioned, the fellowship ended up providing something potentially more valuable. It has given its recipients the one thing they most lacked at their tender ages: a network”.

This came as no surprise. Education and connections are essential to success. As our research at Duke and Harvard had shown, companies founded by college graduates have twice the sales and twice the employment of companies founded by others. What matters is that the entrepreneur complete a baseline of education; the field of education and ranking of the college don’t play a significant role in entrepreneurial success. Founder education reduces business-failure rates and increases profits, sales, and employment.

Myth #4. Women can’t succeed in tech

Women-founded firms receive hardly any venture-capital investments, and women still face blatant discrimination in the technology field. Despite the promises of tech companies to narrow the gap, there has been insignificant progress.

This is despite the fact that according to 2017 Census Bureau Data, women earn more than two-thirds of all master’s degrees, three-quarters of professional degrees, and 80 percent of doctoral degrees. Not only do girls surpass boys on reading and writing in almost every U.S. school district, they often outdo boys in math—particularly in racially diverse districts.

Earlier research by my team revealed that there are also no real differences in success factors between men and women company founders: both sexes have exactly the same motivations, are of the same age when founding their startups, have similar levels of experience, and equally enjoy the startup culture.

Other research has shown that women actually have the advantage: that women-led companies are more capital-efficient, and venture-backed companies run by a woman have 12 percent higher revenues than others. First Round Capital found that companies in its portfolio with a woman founder performed 63 percent better than did companies with entirely male founding teams.

Myth #5. Venture Capital is a prerequisite for innovation

Many would-be entrepreneurs believe that they can’t start a company without VC funding. That reflected reality a few years ago, when capital costs for technology were in the millions of dollars. But it is no longer the case.

A $500 laptop has more computing power today than a Cray 2 supercomputer, costing $17.5 million, did in 1985. For storage, back then, you needed server farms and racks of hard disks, which cost hundreds of thousands of dollars and required air-conditioned data centers. Today, one can use cloud computing and cloud storage, costing practically nothing.

With the advances in robotics, artificial intelligence, and 3D printing, the technologies are becoming cheaper, no longer requiring major capital outlays for their development. And if entrepreneurs develop new technologies that customers need or love, money will come to them, because venture capital always follows innovation.

Venture Capital has become less relevant than ever to startup founders.

For more, follow me on Twitter: @wadhwa and visit my website: www.wadhwa.com


Meeting Analog Reliability Challenges Across the Product Life Cycle
by Daniel Payne on 08-14-2018 at 12:00 pm

Create a panel discussion about analog IC design and reliability and my curiosity is instantly piqued, so I attended a luncheon discussion at #55DAC moderated by Steven Lewis of Cadence. The panelists were quite deep in their specialized fields: Continue reading “Meeting Analog Reliability Challenges Across the Product Life Cycle”


Architecting an ML Design
by Bernard Murphy on 08-14-2018 at 7:00 am

Discussion on machine learning (ML) and hardware design has been picking up significantly in two fascinating areas: how ML can advance hardware design methods and how hardware design methods can advance the building of ML systems. Here I’ll talk about the latter, particularly about architecting ML-enabled SoCs. This approach is getting major traction for inferencing applications, particularly when driven by power considerations (e.g. in the IoT and increasingly in automotive apps), and also in training when driven by demand for the highest performance per watt (e.g. the Google TPU).

Architecture has to be the center of design in such a system, optimizing the algorithm and architecture for your CNN. For the core algorithm, this means choosing the number, types (convolution, pooling, …) and characteristics (stride, …) of the layers in the network. In the implementation, memory support is one critical consideration. Neurons like tightly-coupled memories for weights, activation functions and for retrieving and storing neuron inputs and outputs, but everything can’t be tightly coupled, so you also need high-bandwidth access to larger memories – like caching, though the implementation can be quite different in CNNs. And you want to optimize PPA, aggressively so if the design is targeted at a battery-operated application.
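To see why memory planning dominates, here is a rough back-of-envelope sketch (not a Synopsys tool flow) of the MAC count and weight/activation footprint of a single convolution layer; these are the kinds of numbers that decide what fits in tightly-coupled memory and what needs high-bandwidth access to larger memories. All layer parameters are illustrative.

```python
# Back-of-envelope sketch: estimate the multiply/accumulate count and the
# weight/activation footprint of one convolution layer. Parameters are
# illustrative, not taken from any particular design.
def conv_layer_cost(in_ch, out_ch, kernel, out_h, out_w, bytes_per_elem=1):
    macs = in_ch * out_ch * kernel * kernel * out_h * out_w   # one MAC per weight per output pixel
    weight_bytes = in_ch * out_ch * kernel * kernel * bytes_per_elem
    activation_bytes = out_ch * out_h * out_w * bytes_per_elem
    return macs, weight_bytes, activation_bytes

# Example: a layer roughly like AlexNet's second convolution (numbers are approximate)
macs, w_bytes, a_bytes = conv_layer_cost(in_ch=96, out_ch=256, kernel=5, out_h=27, out_w=27)
print(f"MACs: {macs/1e6:.1f} M, weights: {w_bytes/1e3:.0f} KB, activations: {a_bytes/1e3:.0f} KB")
```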

There are challenges in finding an optimal architecture for these systems. RTL isn’t an option – you need to test out different options with 100s of MB of data (images are an obvious example), so this has to run at a higher level of abstraction. And it’s not obvious (to me at least) how you would experiment with CNN architectures in a JIT-type virtual prototype running on a general-purpose processor (or even a cluster). This class of architecture design seems a much better fit with TLM approaches, for example Platform Architect Ultra.

You’ll likely start in one of the standard CNN frameworks (Caffe, TensorFlow, etc.), where most experts in the field are familiar with building CNN graphs. You can translate this into a workload model in Platform Architect (Synopsys calls this a task graph). This can then be mapped onto the IP in your larger SoC model: CPUs, memory interfaces and the on-chip bus. Naturally the platform supports a rich library of IPs and VIPs for this task:

  • Processors – Arm, Synopsys ARC, Tensilica and CEVA DSP, traffic generators and more
  • Memory subsystem – DDR and multiport memory controllers from Arm and Synopsys
  • Interconnect models from Synopsys, Arteris IP, Arm and NetSpeed

Architecting the bulk of the SoC is pretty familiar; what is different here is designing the task graph for the CNN, or adapting it from an imported graph. I don’t want to get too much into the details of exactly how this works – you should follow the webinar link at the end to get a more complete description. But I think it is useful to highlight some of the key points for those of us who think more in terms of conventional logic.

Since you’re likely working with an imported CNN task graph, you probably need to build sub-components which will ultimately connect to TLM models, starting for example with a convolution layer in which you need tasks like read_input (reading an image sub-block), read_coefficients (e.g. for weights), process the neural function and write_result. The tool supports this through the creation of task components as blocks which you then connect to create a graph, indicating serial versus parallel processing. You also add parametrization to tasks and connections for things like the height and width of a frame and the stride (how much the filter shifts on the input map per convolution).
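As a purely conceptual illustration (not the Platform Architect API), the sketch below models those tasks and their serial composition as plain Python objects, parameterized by frame size and stride. The task names echo the ones above, but the cycle costs and structure are invented for illustration.

```python
# Conceptual sketch only: model the tasks of one convolution layer and their
# serial/parallel structure, parameterized by frame size and stride. Cycle
# counts are invented placeholders, not measured numbers.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    cycles: int                     # rough cost of one execution of this task

@dataclass
class TaskGroup:
    name: str
    tasks: list = field(default_factory=list)
    parallel: bool = False          # True: members overlap; False: run back-to-back

    def latency(self):
        costs = [t.cycles if isinstance(t, Task) else t.latency() for t in self.tasks]
        return max(costs) if self.parallel else sum(costs)

def conv_layer(height, width, stride, cycles_per_output=10):
    outputs = (height // stride) * (width // stride)
    return TaskGroup("conv_layer", [
        Task("read_input", cycles=height * width),
        Task("read_coefficients", cycles=256),
        Task("process", cycles=outputs * cycles_per_output),
        Task("write_result", cycles=outputs),
    ])

print(conv_layer(height=224, width=224, stride=4).latency(), "cycles (illustrative)")
```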

You can then simulate this model and sweep over different scenarios, looking at basic latencies and resource utilization, and fine-tune and connect this as a hierarchical sub-component in the larger task graph. You will also build a structure to support the main body of your SoC (this is standard Platform Architect stuff), into which your CNN task graph is instantiated.

From here you start mapping between tasks and TLM models in the library – connecting your task-graph to a TLM implementation. Now when you simulate with scenarios, you get more realistic info on latency, throughput, utilization and so on. You can also do system-level power analysis at this stage by adding UPF3 power monitors (state machines bound to each TLM model, with estimated power per state). Ah-hah – so that’s where these UPF3 monitors are used 😎

The presenters (Malte Doerper – PMM, and Tim Kogel – AE manager, both in the virtual prototyping group) show the application of these concepts to an AlexNet case study (AlexNet is the reference CNN benchmark everyone uses). Their goal at the start is to process a frame in 10ms with 4 frames processing in parallel, and to consume less than 500mW. They show a nice analysis, based on running over multiple scenarios, where they can compare implementation choices against the performance spec, power and energy (the old trick – run faster at higher power but lower energy). And of course they meet their power and performance goals!
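As a quick sanity check using only the targets quoted above (illustrative arithmetic, not results from the webinar), those goals imply the following throughput and energy per frame:

```python
# Illustrative arithmetic from the stated goals only: 10 ms per frame,
# 4 frames in flight, less than 500 mW. Not data from the webinar.
frame_latency_s = 10e-3
frames_in_parallel = 4
power_w = 0.5

throughput_fps = frames_in_parallel / frame_latency_s     # 400 frames/s
energy_per_frame_mj = power_w / throughput_fps * 1e3       # 1.25 mJ per frame

print(f"{throughput_fps:.0f} frames/s, {energy_per_frame_mj:.2f} mJ per frame")
```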

I found this a pretty interesting exposition of how CNN design can fit comfortably into the full SoC architecture goal. Modeling convolution layers through tasks may look a little different, but it all seems to flow quite smoothly into full architecture creation. There’s a lot more detail in the webinar, along with a quite detailed Q&A. You can learn more HERE.


A True Signoff for 7nm and Beyond
by Alex Tan on 08-13-2018 at 12:00 pm

The Tale of Three Metrics
Meeting PPA (Performance, Power and Area) targets is key to a successful design tapeout. These mainstream QoR (Quality of Results) metrics are rather empirical yet inter-correlated, and have been expanded to link with other metrics such as yield, cost and reliability. While the recent CPU performance race is less intense, as Moore’s Law based scaling becomes increasingly costly and complex, power is taking center stage. This has been intensified by the proliferation of more silicon geared toward mobility (automotive, wireless-augmented-anything), distributed applications (Internet of Things) or scalable computing (multi-core).

Many of the adopted low-power implementation flows have shown that a holistic approach is necessary in order to achieve an optimal power target. It must also be initiated early, at the architectural level, by addressing multiple power domains and clock domain partitioning, followed by power optimization at the different implementation stages. Concurrently, accurate power analysis is required to provide feedback for adjustments to the optimization constraints, as tradeoffs among these metrics are expected to recur.

Power Signoff Challenges
Power signoff checks the power integrity of the grid. According to Jerry Zhao, Cadence Product Director, today’s designs can be categorized into two types for the purpose of analyzing power grid construction and identifying its challenges. The first design type demands capacity, as it contains billions of instances and nodes, such as power-hungry GPUs (Graphics Processing Units) or CPUs for machine learning. The other is small in size but more sophisticated, requiring multiple power domains (such as IoT related chips) or special treatment (such as package-integrated analysis for automotive) – for this type, higher analysis accuracy is needed.

Additionally, with advanced nodes such as 7nm and 5nm FinFETs, where metal resistance is more pervasive, higher correlation accuracy is also required between the implementation estimates and the analysis results generated by the signoff tools. In flows utilizing a traditional signoff approach, non-converging iterations are common occurrences, as they rely on design margins and suffer from a disconnect in either the underlying data model or the optimization/analysis engines of the associated point tools.

Cadence True Signoff Solution
As part of tapeout signoff, design teams perform various validations, including physical verification (DRC, LVS, reliability), timing and power. While it is common for crosstalk and SI (Signal Integrity) effects to be handled concurrently with timing analysis, power verification (related to IR drop and electromigration/EM) is traditionally decoupled from timing analysis, in spite of the fact that IR-induced voltage drop and the remedies applied to avoid EM may incur significant timing differentials that could change the criticality of a timing path.

Driven by such tighter correlation needs, especially at the advanced nodes, Cadence launched project Virtus (Voltus IR drop TempUS technology), which integrates power and timing analysis to yield a true signoff solution. It has also been aligned with the existing Cadence digital implementation flow for both QoR predictability and convergence. The overall solution is dubbed full-flow digital.

As discussed during Cooley’s DAC 2018 Troublemaker Panel, Jerry shared a customer case involving a high frequency design. The 3GHz design passed signoff using other third-party power and timing tools, but failed to perform on silicon at the targeted frequency, falling short by several hundred MHz. This design was then subjected to true signoff analysis, which uncovered IR drop violations that could induce timing violations comparable to the earlier post-silicon observation.

How much impact does an IR drop violation have on timing? At the DAC theater presentation, Cadence showcased such IR-induced timing violations on a 2.5GHz, 7nm CPU-based design testcase. A non-critical path having +31ps slack and passing power integrity analysis with +42mV margin was identified by true signoff as failing timing by -33ps due to the presence of proximity aggressors (equivalent to an 8% slowdown in speed). Interestingly, this path was not one of the top paths; instead, it was buried in the deeper non-critical path bins.
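A quick back-of-envelope check, using only the figures quoted above, shows how those numbers relate at a 2.5GHz clock:

```python
# Illustrative arithmetic based only on the figures quoted above: at 2.5 GHz the
# clock period is 400 ps. Swinging a path from +31 ps slack to a -33 ps violation
# implies roughly 64 ps of IR-drop-induced extra delay, and the -33 ps shortfall
# alone is about 8% of the cycle.
clock_ghz = 2.5
period_ps = 1e3 / clock_ghz                          # 400 ps
slack_before_ps = 31
slack_after_ps = -33

extra_delay_ps = slack_before_ps - slack_after_ps    # 64 ps added by the aggressors
shortfall_pct = -slack_after_ps / period_ps * 100    # ~8.3% of the cycle

print(f"period {period_ps:.0f} ps, extra delay {extra_delay_ps} ps, shortfall {shortfall_pct:.1f}% of cycle")
```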


Speed and Parallelism

Since scalability and performance are needed for power integrity analysis, Cadence has recently rolled out Voltus-XP, enhanced with extensive parallelism algorithms to support power grid signoff on giga-scale designs with massively distributed processing. It is cloud ready and provides up to a 5x speedup.

Full-Flow Digital Solution
The tight handshake exemplified between Voltus and Tempus seems to be just one of the many close interactions among Cadence tools, as shown in the Full-Flow Digital Solution diagram, which is in line with this year’s Cadence slogan of being a system design enabler.

“Design closure is tightly correlated with how a cell is placed and routed as it impacts how current flows through the placed region and thus, influencing the IR drop,” stated Jerry referring to Voltus-Innovus IR-drop-aware placement fixing.

As one moves toward the top of the die, packaging has its own IR drop requirements, different from those at the chip level. It is more thermally centric and involves gradual change compared with quick voltage ramps. A similar handshake with packaging analysis is also needed. Designers are expected to iterate through the tools in the ecosystem, though not necessarily concurrently, in order to ensure QoR convergence.

In addition to IR drop checks, 7nm and 5nm power integrity means meeting foundry-driven EM rules, which requires year-round team collaboration culminating in passing the foundry certification process.

Signoff Solution for The Advanced Nodes
Asked for his take on the adequacy of Cadence’s current tool offerings for 7nm or 5nm signoff, Jerry stated that the implementation step could further leverage the analysis outcome. “When Voltus is running it will report hot-spot areas (i.e. with IR drop errors) understood by Innovus and used to do IR-aware placement fixing,” stated Jerry. The tool is smart enough to move the aggressor by a few rows based on a cost function. A rerun of Voltus is needed to ensure the issue is fixed. Using this approach a designer can reduce the IR drop by 30% in one iteration. Multiple iterations resolve a significant number of IR related issues, although they may not fix all of the problems, since there are designer-imposed constraints such as not allowing the tool to touch the clock trees.

Addressing the issue from another angle has been demonstrated through the Tempus timing signoff tool. Jerry said that since Voltus and Innovus share the same database (data model), an integrated Tempus-Voltus timing analysis can be done and an ECO based on the voltage report can be generated to fix timing violations.

To recap, both timing and power signoff take increasingly longer with more complex designs and advanced nodes. The Cadence integrated signoff solution not only provides multi-dimensional analysis but is also tightly coupled with the optimization-based tools to alleviate signoff bottlenecks.

For more info on Voltus please check HERE and Cadence silicon signoff HERE.


SEMICON West – Leading Edge Lithography and EUV
by Scotten Jones on 08-13-2018 at 7:00 am

At SEMICON West I attended the imec technology forum, multiple Tech Spot presentations and conducted a number of interviews relevant to advanced lithography and EUV. In this article I will summarize what I learned plus make some comments on the outlook for EUV.
Continue reading “SEMICON West – Leading Edge Lithography and EUV”


Are We Over Uber? Bring on the Bots
by Roger C. Lanctot on 08-12-2018 at 12:00 pm

From sexual harassment, to surveilling regulators, to Uber drivers and taxi drivers committing suicide (because they can’t make a living), the pervasive creepiness of Uber continues to spread, while the means of corralling this societal phenomenon creeps steadily forward like sclerotic mid-town traffic. The latest chapter in the Uber saga is unfolding in New York, where court rulings (in line with similar rulings in California) may force gig drivers to be treated as employees just as the New York City Council is considering freezing for-hire licenses to combat negative urban traffic consequences.

Multiple studies show that the onset of ride hailing apps such as Uber, Lyft, Via, Yandex, Grab and others around the world is driving up urban traffic congestion and undermining public transportation. The latest moves in New York, however, highlight the favor felt by under-served minority communities when it comes to fulfilling their ad hoc local transportation needs. In New York, and elsewhere, traditional Yellow Cabs have a reputation of insufficiently servicing minority riders (i.e. not stopping to pick them up) and their communities (i.e. refusing to drive to particular neighborhoods).

In New York, civil rights organizations and leaders including the New York Urban League and the Reverend Al Sharpton’s National Action Network have come out in opposition to the freeze on for-hire licenses and in support of Uber. These two currents perfectly capture the conundrum of a service that undercompensates drivers, many of whom are immigrants from minority communities, while providing superior service to those same minority and immigrant communities.

Let’s not get carried away, though, because Uber (and Lyft etc.) drivers are also well known for choosing not to accept certain fares (even though they may suffer app-based consequences). It’s not a perfect solution.

The New York Times chimed in with its own editorial solution of a minimum wage for gig economy drivers and a hike in taxi fares along with congestion charging for drivers of privately owned vehicles entering the city. The Times would prefer to see some balance restored in the transportation network between individual vehicles and public means of transportation.

That’s right. Autonomous taxis can be more effectively managed and regulated and won’t discriminate regarding passengers or destinations. A recently published study modeling the creation of an automated taxibot-centric transportation network in Lisbon, Portugal, found that 90% of cars could be eliminated and commute times reduced though traffic would remain depending upon the degree of reliance on the taxibots.

– Urban Mobility System Upgrade – International Transportation Forum

The pressure is growing on Uber et. al. to compensate drivers fairly and treat them as employees. Uber continues to be forced out of particular cities and countries – or to sell off its assets to local competitors.

The resistance to freezing for-hire licenses in New York City is a recognition that Uber and competing services have a significant driver turnover problem – as well as a surfeit of drivers only working part-time. Freezing licenses could actually improve the compensation picture for the remaining drivers, but might limit the availability of the service.

But all of this regulatory and legal attention reflects the reality that the Uber model works against both the gig drivers and the regulated and licensed taxi drivers. What is worse is that taxi operators around the world have demonstrated an almost comprehensive inability to compete – dependent as they have become on regulated fares and licensing that render them more or less defenseless.

The combination of regulatory pressure and growing urban gridlock has set the stage for a combined downward spiral in the quality of service for taxis, ride hailing services and public transportation. What looks to an Uber et al. passenger like a win-win is actually a lose-lose. The only answer, it seems, is taxibots. So, I guess that means we’re facing another 5-10 years of sexual harassment, regulatory surveillance and taxi (and Uber) driver suicides along with declining public transit performance. It’s a small price to pay for a cheap cab ride, or is it?


Florida Sends Mixed Autonomous Message
by Roger C. Lanctot on 08-12-2018 at 7:00 am

Florida is poised to surpass California and Arizona as the leader in autonomous driving development thanks to an aggressive legislative agenda that began in 2012 and direct engagement with autonomous car developers. The state has good reasons for fostering automated driving given that it is the second largest state in the U.S. with the third highest number of highway fatalities.

These Florida realities and others were brought to my attention by a member of the Florida Chamber of Commerce who I met earlier this year. The Chamber of Commerce is at the heart of the “Autonomous Florida” effort.

There are many motivations for stimulating autonomous vehicle development in a state where the driving environment is heavily influenced by the behavior of aging drivers and tourists. By many measures Florida is the number one tourist destination in the U.S. and Florida is a leader in highway fatalities involving senior citizens.

SOURCE: TRIP Transportation Research

In spite of all this momentum and motivation, the Florida Chamber of Commerce, which has been a leader in the autonomous driving effort, recently held a Webinar that focused on potential job losses and other negative economic impacts from autonomous driving. The Webinar also highlighted varying levels of ambivalence toward the technology from different demographic segments.

The Webinar embodied all the worst boogieman concerns that autonomous driving technology inspires including fear of the vehicles themselves and fear of near-catastrophic job losses from their deployment. The gloomy Webinar contrasts mightily with the otherwise autonomous tech booster-ism of the Florida Chamber.

In fact, Florida is poised to add multiple new autonomous vehicle partners including Cruise Automation and Waymo in coming months, vaulting the state into contention with California and Arizona for leadership in the segment. Florida has done much to lay the groundwork, including a focus on funding infrastructure investments and setting up multiple smart city and sustainable community initiatives.

The onset of Florida’s leadership is notable as it becomes the third fair weather state – fourth if you include Nevada – to embrace autonomous vehicle tech. Florida is also following the lead of Arizona in clearing away regulations to open up its roads to driverless, steering wheel-less, and brake-and-accelerator-pedal-less automated vehicles.

This is important in the context of countries around the world that are attempting a top-down approach to simultaneously fostering and regulating autonomous vehicles. The message from Florida (and Arizona and Virginia and a growing roster of U.S. states) is: Less is more.

Regulators and legislators are more likely to get things wrong or narrow development options by intervening too aggressively. The onset of autonomy does indeed have implications for infrastructure, employment, productivity and consumer behavior. But regulators and legislators are unlikely to “guess right” when it comes to specifying how technologies are deployed or which technologies are acceptable.

Safety and security are key issues and standards-setting organizations are best equipped to address these concerns. But just as we are exploring allowing drivers to take their hands off the steering wheel, it is probably best that regulators and legislators keep their hands off the technology.

Legislation before the U.S. Congress intended to regulate autonomous vehicle deployments and override state authority is currently stalled, and rightly so. The implementation of autonomous vehicle technology will mean different things in different regions and, thus far, there is no single clear path to autonomy.

California has implemented an array of licensing and data collection obligations. Florida and Arizona, in contrast, do not. Only time will tell which path forward is most attractive, acceptable or swiftest – to say nothing of safest.

States are closer to the action and better able to respond than the Federal government. The National Highway Traffic Safety Administration in the U.S. Department of Transportation has thus far demonstrated it is adequately up to the task of providing basic regulatory support within existing rules. The Society of Automotive Engineers has made its standards-setting contribution and is working closely with the Florida Chamber on autonomous vehicle testing and development.

A study from Germany-based research firm Progenium conferred autonomous vehicle development leadership upon the U.K. based on 23 factors as part of an Autonomous Driving Index. U.K. panelists at a recent London mobility event agreed with those findings in the context of the U.K.’s top-down oversight of autonomous vehicle development.

As for me, I prefer what, to Europeans, must look like a highly chaotic autonomous vehicle development environment ruled, for now, by individual states with some light Federal regulatory oversight. Autonomous driving is a local phenomenon ruled by varying local driving regulations and demographic and meteorological conditions.

Even individual states can get things wrong – particularly when rules become too intrusive. My favorite absurd rule is California’s special licensing for “drivers” of autonomous vehicles. Wait. What? It makes no sense to me.

Florida is on the right path with the right idea. But Florida needs to set aside the confusing messaging around lost jobs and fear of autonomous cars. Autonomous vehicles will create employment, stimulate the economy and change the way people think about transportation generally and driving in particular – and all of that is for the greater good.


eSilicon and SiFive partner for Next-Generation SerDes IP
by Daniel Nenni on 08-10-2018 at 12:00 pm

While writing “Mobile Unleashed: The Origin and Evolution of ARM Processors In Our Devices” it was very clear to me that ARM was an IP phenomenon that I did not believe would ever be repeated. Clearly I was wrong, as we now have RISC-V with an incredible adoption rate, a full-fledged ecosystem, and top tier implementers, which now include eSilicon.

I spoke to Mike Gianfagna and Lou Turnullo from eSilicon about their recent announcement. A small world story, Mike and I worked together at Zycad and Lou and I worked together at Virage Logic. Virage took over the Zycad building so my commute did not change nor did the color of my office. Mike and Lou are very approachable guys with a wealth of experience so if you have the opportunity, definitely approach them.

This announcement is really about SerDes which has changed quite a bit over the years. You will be hard pressed to find a leading edge chip without SerDes inside so this is a big semiconductor deal, absolutely. Earlier SerDes were analog. In an analog SerDes, all characteristics of the SerDes are “hard coded” and cannot be changed. Newer SerDes are more complex and must operate in a wider variety of system configurations. These SerDes are often DSP-based vs. analog.

The eSilicon SerDes is DSP-based. With this architecture you can control characteristics of the SerDes via firmware; this is what the SiFive E2 embedded processor is used for. New SerDes must operate in a wide variety of system configurations – backplane configurations, temperature/humidity extremes, connector types. All of this requires configuration of the SerDes equalization functions so the SerDes matches its operating environment and can deliver the best power and performance.

Bottom line:
DSP-based SerDes can be “tuned” to the operating environment whereas analog SerDes cannot. Another interesting application is continuous calibration, where the SerDes can be tuned and re-tuned over time to adapt to changes in the operating environment and even to changes in the SerDes itself as it ages.
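To give a feel for what firmware-driven equalization tuning can look like, below is a purely conceptual sketch of a least-mean-squares (LMS) tap-adaptation loop of the kind an embedded core could run. It is not eSilicon or SiFive firmware, and every name and value in it is invented for illustration.

```python
# Purely conceptual sketch (not eSilicon/SiFive firmware): an LMS-style update
# of feed-forward equalizer taps, the kind of adaptation loop an embedded core
# could run to "tune" a DSP-based SerDes to its channel. All values are invented.
import numpy as np

def lms_adapt(received, desired, num_taps=5, mu=0.01):
    """Adapt FFE taps so the equalized output approaches the desired symbols."""
    taps = np.zeros(num_taps)
    taps[0] = 1.0                                         # start with a unit main-cursor tap
    for n in range(num_taps - 1, len(received)):
        window = received[n - num_taps + 1:n + 1][::-1]   # newest sample first
        y = np.dot(taps, window)                          # equalizer output
        err = desired[n] - y                              # error vs. expected symbol
        taps += mu * err * window                         # LMS coefficient update
    return taps

# Tiny demo: a 2-tap channel smears an NRZ pattern; adaptation learns to undo it.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=2000)
channel = np.array([1.0, 0.4])                            # main cursor plus one post-cursor
received = np.convolve(symbols, channel)[:len(symbols)]
print("adapted taps:", np.round(lms_adapt(received, symbols), 3))
```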

eSilicon and SiFive put out a good press release so I have included it here. You can read more about the SiFive E2 Core HERE.

eSilicon Licenses Industry-Leading SiFive E2 Core IP for Next-Generation SerDes IP
Configurability of industry’s lowest-area, lowest-power core provided optimal solution for eSilicon

SAN MATEO, Calif. and SAN JOSE, Calif. – Aug. 7, 2018 – SiFive, the leading provider of commercial RISC-V processor IP, and eSilicon, an independent provider of FinFET-class ASICs, market-specific IP platforms and advanced 2.5D packaging solutions, today announced that, after extensive review and testing of available options in the market, eSilicon has selected the SiFive E2 Core IP Series as the best solution for its next-generation SerDes IP at 7nm.

eSilicon’s 7nm SerDes IP represents a new breed of performance and versatility based on a novel DSP-based architecture. Two 7nm PHYs support 56G and 112G NRZ/PAM4 operation to provide the best power efficiency tradeoffs for server, fabric and line-card applications. The clocking architecture provides extreme flexibility to support multi-link and multi-rate operations per SerDes lane.

“Today’s high-performance networking applications require the ability to balance power and density to effectively address increasing performance demands,” said Hugh Durdan, vice president of strategy and products at eSilicon. “SiFive’s E2 Core IP allows eSilicon to provide the flexibility and configurability that our customers require while achieving industry-leading power, performance, and area.”

The SiFive E2 Core IP is designed for markets that require extremely low-cost, low-power computing, but can benefit from being fully integrated within the RISC-V software ecosystem. At one-third the area and one-third the power consumption of similar competitor cores, the SiFive E2 Core series is the natural selection for companies like eSilicon that are looking to address the challenges of advanced ASIC designs.

“eSilicon has a successful track record for leveraging the most advanced technologies to develop high-bandwidth, power-efficient IP for ASIC design,” said Brad Holtzinger, vice president of sales, SiFive. “Our E2 Core Series IP takes advantage of the inherent scalability of RISC-V to bring the highest performance possible to the demands of advanced ASICs. We look forward to working with eSilicon on its next-generation SerDes to address these demands.”

About SiFive
SiFive is the leading provider of market-ready processor core IP based on the RISC-V instruction set architecture. Led by a team of industry veterans and founded by the inventors of RISC-V, SiFive helps SoC designers reduce time-to-market and realize cost savings with customized, open-architecture processor cores, and democratizes access to optimized silicon by enabling system designers to build customized RISC-V based semiconductors. SiFive is located in Silicon Valley and has venture backing from Sutter Hill Ventures, Spark Capital, Osage University Partners and Chengwei Capital, along with strategic partners Huami, SK Telecom, Western Digital and Intel Capital. For more information, visit www.sifive.com.

About eSilicon
eSilicon is an independent provider of complex FinFET-class ASICs, market-specific IP platforms and advanced 2.5D packaging solutions. Our ASIC+IP synergies include complete 2.5D/HBM2 and TCAM platforms for FinFET technology at 16/14/7nm as well as SerDes, specialized memory compilers and I/O libraries. Supported by patented knowledge base and optimization technology, eSilicon delivers a transparent, collaborative, flexible customer experience to serve the high-bandwidth networking, high-performance computing, artificial intelligence (AI) and 5G infrastructure markets. www.esilicon.com


Desperation Drives Inspiration
by Daniel Nenni on 08-10-2018 at 7:00 am

This is the tenth in the series of “20 Questions with Wally Rhines”

1978 was a bad year for TI. In April, Intel announced the 8086 followed by disclosures of 16-bit microprocessors from Motorola, the 68000, and Zilog, the Z8000. TI had tried to leapfrog the microprocessor business by introducing the TMS 9900 16-bit microprocessor in 1976. But the TMS 9900 had only 16 bits of logical address space and the industry needed a 16-bit microprocessor for address space rather than performance. In addition, TI had no peripheral chips for the TMS 9900 and tried to overcome that weakness with an 8-bit bus version of the 9900 called the 9980 (an approach that Intel also followed with the Intel 8088) but TI found that any performance advantages of a 16-bit microprocessor were sacrificed with the 8-bit approach (https://spectrum.ieee.org/tech-history/heroic-failures/the-inside-story-of-texas-instruments-biggest-blunder-the-tms9900-microprocessor). Intel overcame that weakness by winning the design socket for the IBM PC with the 8088 despite the performance weakness.

TI also tried to develop a 16-bit TMS 9940 microcontroller with a whole new set of problems resulting in resignation or termination of much of the microprocessor team. I became manager of the TI microprocessor activity more because nobody wanted the job than through personal merit. But I had a different motivation. At the time, I was engineering manager of TI’s Consumer Products Group, heading the design of calculator chips, Speak ‘n Spell speech processors and other miscellaneous devices. That job was located in Lubbock, Texas, which was not my idea of a great location for a 31 year old single male. So Houston, which had some drawbacks, scored far above Lubbock in my plan. Most of my time in Houston was initially filled by exit interviews with all the people who were bailing out of the sinking ship. Fortunately, there were some resilient, smart people like Kevin McDonough, John Hughes, Jeff Bellay and Jerry Rogers (who later founded Cyrix and married Jodi Shelton, Founder and CEO of GSA). John Hughes convened a day-long meeting to debate what would be important after host microprocessors since we had obviously lost that race.

The answer: Special purpose microprocessors. We chose three and then added a fourth one later, and named them the TMS 320, 340, 360 and 380. The TMS 320 was a communications processor, the 340 graphics, the 360 mass storage. Later the TMS 380 was designed for the IBM Token Ring LAN. The first job was to decide what a communications processor, or Signal Processing Microcomputer, as we called it, would be. Ed Caudel spent the next six months analyzing that question and concluded that the distinguishing characteristic was a single-cycle multiply/accumulate instruction (although we required two cycles in the first generation TMS 32010 but made it to one cycle with the 32020). John commissioned Kevin and others from systems groups around the company to write applications using alternative instruction sets. Early on, we found we needed a DSP expert and, fortuitously, our group in Bedford, England had interviewed one named Surendar Magar. Tony Leigh has documented most of the history very accurately. Surendar quickly determined that the single cycle multiply/accumulate would have to be done in hardware, not software as Ed had hoped. (http://www.tihaa.org/historian/TMS32010-12.pdf)
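For readers less familiar with DSP workloads, the reason the multiply/accumulate mattered so much is that it sits in the inner loop of virtually every signal-processing kernel. The small sketch below (illustrative only, not TMS 320 code) shows an FIR filter performing one MAC per tap for every output sample, so MAC throughput directly sets the achievable sample rate.

```python
# Illustration only (not TMS 320 code): an N-tap FIR filter does one
# multiply/accumulate per tap per output sample, which is why a single-cycle
# MAC was the defining feature of early DSPs.
def fir_filter(samples, coeffs):
    n_taps = len(coeffs)
    out = []
    for n in range(n_taps - 1, len(samples)):
        acc = 0.0
        for k in range(n_taps):          # one multiply/accumulate per tap
            acc += coeffs[k] * samples[n - k]
        out.append(acc)
    return out

print(fir_filter([1, 2, 3, 4, 5], [0.5, 0.25, 0.25]))
```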

TI was not the first company to develop a single chip digital signal processor. In fact, it was the fifth. Intel announced one while we were developing the TMS 320 but it incorporated an on-chip 8-bit A/D and D/A making it unusable for most applications. Chi-Foon Chan, Co-CEO of Synopsys, who was working at Intel on the first DSPs, tells me that the poor customer reception of the 2920 caused Intel to kill the enhanced version of the chip, which he was working on, thus keeping the door open for TI.

Despite lots of delays, the TMS 320 was announced at the February 1982 ISSCC with rave reviews from people like Ben Rosen, the leading semiconductor analyst. We knew we had a winner but the world didn’t understand digital signal processing. We had to publish books, develop algorithm libraries and promote the technology. Financial analysts paid no attention and neither did our senior management, so I found myself giving largely unappreciated presentations at financial and technical meetings as well as in the TI Board room.

We needed some high volume applications and our largest customer was Lear Siegler, who was making analog repeaters for underwater cables. Hardly a high volume application. We needed consumer products companies in Asia. But our Japanese organization was totally uninterested. Their customers almost always wanted custom chip designs. And then a unique event changed the tide. A group in Canada wrote an application note on how to design a FAX MODEM using a TMS 32010. A group in Australia read the article and built a prototype and sold the design to a Japanese company, Murata.

A Murata engineering manager called the TI Japan office and asked for a quote on the TMS 32010. The TI Product Marketing Engineer had never heard of the TMS 320 but he looked it up in the price book and quoted a $35 price. We had never sold one north of $10 so this was a unique response. The Murata engineer said, “Good. I’ll take 20,000 parts.” From then on, we had no resistance from the TI Japan organization and, in fact, they then designed a derivative named the TMS 320C25 which became one of the highest volume members of the family.

The most strategic discontinuity came later. After years of struggle, we convinced Ericsson to design a TMS 320 into a cell phone. A subsequent need for a cost reduced version of the phone became apparent. We had to combine two ASICs, a TMS 320 DSP and a static RAM into a single chip. “How hard can this be?”, I said. All the parts are already verified. I didn’t understand the laws of verification that drive the need to verify internal state, increasing the amount of verification as the square of the number of gates when you combine chips. I willingly committed to Lars Ramquist, the CEO of Ericsson, that we would do the design quickly. A crash effort resulted and, in parallel, Gilles Delfassy took on a similar task for Nokia.

Fortunately, the chips worked and TI grew the wireless baseband MODEM business to something approaching $4 billion per year. The subsequent step was even more critical. To do similar low cost designs for all the other producers of cell phones (and other applications like hard disc drive controllers), we needed to combine our ASIC library with our embedded DSP. Everyone told me that this would be suicide. ASICs were sold as cents per gate while DSPs had high gross margins. But Krishna Balasubramanian (known as Bala) and I decided to combine the ASIC and microprocessor business into one group under Rich Templeton. A good decision. Success followed, DSP-based technology became nearly half of TI’s revenue and Rich eventually became Chairman and CEO. In between, Tom Engibous leveraged the technology to create a wide variety of businesses while building TI’s position in analog. In 2017, TI became the most profitable of the major semiconductor companies in the world at 41% operating profit.

The 20 Questions with Wally Rhines Series