
Ford Motors Discusses Future Mobility Trends at Synopsys Seminar

by Tom Simon on 11-17-2016 at 4:00 pm

Five or ten years ago it would have been hard to imagine someone from Ford Motors giving the keynote at a technology summit at a major EDA company like Synopsys. However, on November 2nd, Synopsys hosted a seminar on the topic of Automotive Architecture Design and System Testing, and Ford Technical Fellow Jim Buczkowski delivered the keynote. The other somewhat ironic side of this is that I did not drive my car to the event, preferring instead to ride light rail. Yes, traffic is that bad. We are all waiting for our autonomous cars.

What Jim had to talk about was pretty interesting. Actually the entire daylong event was captivating. Starting off with Ford first, he talked about how the Ford office in Palo Alto helps to create a culture of innovation that drives development in Dearborn. The areas he identified as relevant to the day’s topic were ADAS, powertrain, chassis and safety. Ford has created an internal initiative called One Ford to focus on their future success. The thrust of this initiative may surprise you.

Ford sees four big changes in society that define how they need to respond as a business. The first one is urbanization. Today there are 28 mega cities with populations of over 10 million people. By 2030 it is projected that there will be 41 mega cities. These cities suffer from gridlock, which will only grow worse. It is estimated that in Paris 20% of the cars driving on the streets are looking for parking. The number of cars on the roads will overwhelm the infrastructure to support them. What's more, 22% of greenhouse gases come from transport, with 75% of that due to cars.

The second trend Ford sees is the growth of the global middle class. Of course the age-old goal of the middle class is to own a car and their own home. We can update that goal now to include a smartphone. The phone may actually come first on the list. As we already know, the phone will play a major role in transportation services.

The third big change almost goes without saying – air quality. This is an issue of increasing concern around the world. The last change is shifting consumer attitudes. Ford sees the importance of fitting into these new attitudes – and this goes beyond just providing transportation vehicles.

Generations of Americans and others around the world have seen the car as a symbol of much sought after freedom. However, due to the trends cited above, Ford now sees freedom as manifesting in the broader moniker of mobility. This is where silicon comes in – it is the enabler for giving people mobility. It is used in all forms of mobility and it is used in the information systems that will improve its efficiency and access.

So what is Ford doing to implement this strategy? If mobility is not just cars, then what is it? I was surprised to see that Ford has invested in a dynamic shuttle service called Chariot in San Francisco. It happens that Chariot was using Ford vehicles, but the main point is that Ford sees this kind of business as key to fulfilling the mobility initiative. Add to this the surprising investments in Ford GoBike and GoDrive.

Jim was quite frank during his keynote in stating that buying a car is a compromise. There are times when you want to carry cargo, and other times when you just want to transport yourself. Sometimes you want to go on the highway, or in snow; other times you just need to go 2 miles from your house to shop or go to work. Ford sees the solution to this dilemma in the form of car sharing, fractional ownership or pay as you go. If you think about it, cars are extremely underutilized. They spend most of their lives idle in your driveway or in a parking spot.

OK, so let’s talk about where electronics comes into the picture. Jim pointed out that the design activity in cars has moved from pure mechanical and electromechanical to an era where electronics and electronic controls are enabling nearly every important vehicle system. However, automotive products occupy a very interesting position between consumer products and things like aircraft.

With a phone, it might be tolerable if some part is not working properly, but with cars – just as with airplanes – it needs to “just work.” Consumer products have high volumes and can defray development costs over large numbers of units. Airplanes are low volume and very expensive. Cars must thread the needle to find an acceptable balance with their moderate volumes, and pricing that works for automobile buyers. More so than phones, autos have regulatory requirements, but not as severe as those for aircraft.

The end goal for cars is autonomous vehicles. We are seeing cars with Level 1 and Level 2 automation. Level 1 is driver assistance, and Level 2 is partial automation – like the Teslas being produced today. Though there is some gray area regarding whether the current Tesla Autopilot is Level 2 or 3, Ford feels strongly that any system that requires the driver to hand off and regain control can create ambiguity and is therefore more dangerous. Ford is skipping Level 3 and committed to having Level 4 by 2021.

Connectivity is the watchword for Ford. The car becomes another device connected to the internet. For their navigation system they will use map data as a primary data source and then overlay camera and sensor data to ascertain actual driving conditions. What this says is that they will not rely on sensor data as primary input for route decisions. Ford feels that there is still a lot of work to be done to ensure highly reliable internet connections for autos, especially at highway speeds.
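The map-primary, sensor-overlay layering described above can be pictured with a tiny sketch. This is purely illustrative; the condition names and speed thresholds here are hypothetical, not anything Ford disclosed:

```python
# Toy illustration of the layering described: map data is the primary
# route input, and live camera/sensor data only overrides local conditions.
# Condition names and thresholds are hypothetical.
def effective_speed_limit(map_limit_mph: int, sensor_conditions: dict) -> int:
    limit = map_limit_mph                  # primary source: the map
    if sensor_conditions.get("rain"):
        limit = min(limit, 45)             # overlay: degrade for weather
    if sensor_conditions.get("construction"):
        limit = min(limit, 25)             # overlay: degrade for road work
    return limit

print(effective_speed_limit(65, {"rain": True}))   # 45
```

The point of the structure is that losing the sensor feed degrades gracefully back to the map, which matches Ford's stated priority ordering.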

They will be taking a cue from Silicon Valley in leveraging existing infrastructure as much as possible. Automobile data systems need to be secure, they want seamless cloud integration and they are looking for closer than ever cooperation with their technology partners. They see newly developed and existing standards, like ISO26262, as critical components.

The rest of the day was a deep dive into technologies that are enabling development of automotive ICs and systems in the areas of engine control, autonomous vehicles, safety and infotainment.

One of the big takeaways was that it is now possible to perform virtual testing of all the above systems. No longer is it necessary to hook up real hardware to test system functionality and performance. Synopsys provides tools that can be used for virtual prototyping of automotive systems. This can shorten overall development time, especially when there is hardware and software co-design. The software development can occur much sooner in the process. Also during debug, the virtual prototype can assist by providing full transparency into system state to accelerate the process of resolving issues.

Car companies are broadening their nets – witness Ford’s newer emphasis on Mobility – which will necessitate reliance on a broad selection of electronics, from power devices to advanced GPU’s. These will be integrated into complex systems to deliver what once would have been considered science fiction levels of services. The inevitable result is that car companies, their suppliers, and even the suppliers to their suppliers all need to embrace new and challenging technologies. The next 5 years will deliver some pretty amazing stuff.

For more information on the Synopsys portfolio of automotive-specific IC design tools, IP and software development tools, like those being used by the Seminar presenters, please look at the Synopsys website.

Read more SemiWiki automotive blogs here.

Read more articles by Tom Simon here.


IC Design Management: Build or Buy?

by Daniel Payne on 11-17-2016 at 12:00 pm

When I first started doing circuit design with Intel at the transistor level back in the late 1970’s we had exactly two EDA tools at our disposal: an internally developed SPICE circuit simulator, and a commercial IC layout system. Over the years at Intel the internal CAD group added many more automation tools: gate level simulator, cycle based simulator, DRC, LVS, PLA generator, schematic capture, IC layout. The point is that many IC and SoC companies have internal CAD groups that are tasked with creating tools to make the design and management of IP easier for the design groups. From a management perspective someone has to be asking the question, “Should we develop this automation ourselves, or just use something off the shelf that is commercially supported?”

Focusing on the area of IC design management (DM) our semiconductor industry has often coded their own version control systems that made a lot of sense at the time the need was identified. A common architecture to start with for data management uses a single server per project as shown below:

There are some limitations when using a server per project for design management, like:

  • Difficult to share or re-use semiconductor IP across projects
  • Little scaling
  • Limited performance

A more modern approach to DM tools is to re-use existing version control software, have a centralized architecture, and scale across an entire organization. Here’s a picture of this architecture:

An immediate benefit of this centralized approach is how easy it is to share IP and updates across the entire company.

OK, so the modern approach looks better than the server per project idea, so which commercial DM tool should I even consider? Well, first consider selecting a vendor that gives you a choice in file versioning system instead of locking you into a proprietary file versioning system. The idea is that you can choose a commercial file versioning system that has the best scalability and reliability to handle your biggest SoC designs easily. Proprietary version control systems don’t scale well to support the giga-size volumes that modern SoCs demand.

Being able to easily share all of the semiconductor IP within your company to all projects is a big plus with the centralized server architecture, because there are no more silos of data to stitch together. With a Platform Based Design methodology each of your project teams can get quick access to the most updated version of IP and support files, then get alerts when there’s been any updates. With a Single Source of Truth your company is going to spend less time on IT and support costs.
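The single-source-of-truth idea described above can be sketched in a few lines. To be clear, the class and method names below are hypothetical, invented for illustration; this is not the ProjectIC API, just a model of one central catalog that every project resolves IP through, with update alerts:

```python
from dataclasses import dataclass

# Hypothetical names for illustration only -- not the ProjectIC API.
@dataclass
class IPVersion:
    ip_name: str
    version: int
    files: dict          # path -> content hash in the central repository

class CentralCatalog:
    """One catalog shared by all projects: publish once, visible everywhere."""
    def __init__(self):
        self._versions = {}      # ip_name -> list of IPVersion, oldest first
        self._subscribers = {}   # ip_name -> callbacks to alert on updates

    def publish(self, ver: IPVersion) -> None:
        self._versions.setdefault(ver.ip_name, []).append(ver)
        for notify in self._subscribers.get(ver.ip_name, []):
            notify(ver)          # alert every project tracking this IP

    def latest(self, ip_name: str) -> IPVersion:
        return self._versions[ip_name][-1]

    def subscribe(self, ip_name: str, callback) -> None:
        self._subscribers.setdefault(ip_name, []).append(callback)

catalog = CentralCatalog()
alerts = []
catalog.subscribe("serdes_phy", alerts.append)
catalog.publish(IPVersion("serdes_phy", 1, {"rtl/phy.v": "ab12"}))
catalog.publish(IPVersion("serdes_phy", 2, {"rtl/phy.v": "cd34"}))
assert catalog.latest("serdes_phy").version == 2 and len(alerts) == 2
```

Contrast this with the server-per-project model: there, each project holds its own copy of the IP and updates must be manually stitched across silos.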

Here’s a summary of what you should be looking for in a modern, IP management system:


  • Scalability, reliability, supporting 10,000+ users and projects at once
  • Quick access to remote users
  • Minimize network traffic and disk space
  • Track all IP, release management, workspace contents, versioning, bug tracking
  • Identify each IP developed, capabilities, and build workspaces per IP specifications
  • Report the quality status of IP, bugs, versions, retirement status
  • Usable within design tools, command line, or browser
  • Support traceable export control

    That’s quite the list of IP management requirements, and one EDA vendor that meets it is Methodics. Engineers at Methodics have created ProjectIC, which enables IP-centric Platform Based Design using the concept of a Single Source of Truth by handling every aspect of your IC project:

    • Design Files
    • Permissions
    • Hierarchy
    • IP Versions
    • Bug Tracking
    • Labels and Custom Fields
    • Release Management
    • Hooks
    • IP Usage Tracking
    • Workspace Tracking

    The technology that enables file sharing while reducing size and network bandwidth by up to 90% is called Warpstor, and it comes in handy when your SoC workspaces exceed 100GB. Best of all, Warpstor is invisible to design engineers.
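Warpstor's internals aren't described here, but content-addressed deduplication is one common way tools achieve savings of that magnitude: identical file contents across many workspaces are stored once, keyed by hash. A toy sketch of that general idea (the class name is invented; this is not Warpstor's actual mechanism):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical file contents across many
    workspaces are kept once, keyed by hash. Illustrates one common way
    to get large space/bandwidth savings; not Warpstor's actual design."""
    def __init__(self):
        self.blobs = {}   # sha256 hex digest -> bytes

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)   # stored once, however many refs
        return digest

    def get(self, digest: str) -> bytes:
        return self.blobs[digest]

store = DedupStore()
h1 = store.put(b"same RTL file in 100 workspaces")
h2 = store.put(b"same RTL file in 100 workspaces")
assert h1 == h2 and len(store.blobs) == 1   # one physical copy
```

Each workspace then only needs to transfer hashes it doesn't already have locally, which is where the network-bandwidth savings come from.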

    One of the best file versioning systems around is Perforce Helix because it has server technology that supports tens of thousands of users.

    Cadence IC design users will be right at home with Methodics because the VersIC design tool integrates ProjectIC and Perforce into their familiar user interface.

    Summary
    Now that you know a bit more about DM from Methodics and their Single Source of Truth, you can compare it with any internal or proprietary system you use now. Many design groups opt for a commercial tool because of the features, performance, reliability and integration. Read the full White Paper on this topic.

    Related blog – Requirements Management and IP Management Working Together

    Related blog – 5 Reasons Why Platform Based Design Can Help Your Next SoC


Ada in the IoT?

    by Bernard Murphy on 11-17-2016 at 7:00 am

    For the great majority (I assume) of my audience, if you think about Ada at all, you probably think about military and aerospace applications. Using Ada in the IoT might seem like overkill – cumbersome, over-powered and entirely unnecessary. Or so I thought until I talked to Quentin Ochem of Adacore at ARM TechCon.

    For those of you unfamiliar with Ada, I’ll start with a quick summary. The language was developed under a US Department of Defense contract with the objective of ensuring intrinsically high quality by construction. This would be achieved, to the greatest extent possible, by ensuring errors would be found by the compiler rather than at run-time. The language was named after Ada, Countess Lovelace, the only legitimate child of Lord Byron and famous for developing the first algorithm to be run on a machine. Closer to home, VHDL was founded in large part on Ada syntax, thanks to another US DoD contract which wanted to maximize overlap of this hardware language with the software language.

    As you might expect, Ada products from Adacore have been used in spacecraft, aircraft and military programs. But they have also been selected for hospital IS management, financial systems, grid management, railway control and air traffic management systems. They are also being used in a Toyota research program in Japan, together with Adacore’s Spark formal verification software, to develop a vehicle (car, truck, etc) component implementation that can be proven to be free of run-time errors. These are all programs that come closer to our day-to-day lives, yet all require very high assurances of safety, and increasingly security. And since many of these applications are either mobile or remote, they clearly impact IoT implementations, whether at the edge or in the cloud.

    Of course one concern might be that Ada run-time libraries would be too heavy to be used at the edge. But Quentin told me they have already ported to an AVR 8-bit controller with 256KB memory, so that shouldn’t be a concern. Another might be lack of a pool of trained software engineers. Quentin said this is more an issue of commitment than difficulty. In their experience, good C/C++ developers can get up and running with Ada in a week. Presumably it takes longer for them to become fully proficient, though perhaps no more so than in switching from C++ to Python. (And yes, Ada now supports object-oriented programming if you were wondering.) Another concern would be interoperability with legacy software (who can afford to rewrite everything in Ada?). This apparently isn’t a problem – bindings are provided to interface with C, C++, Java and Python, among other languages. You only have to consider Ada for the bits you feel are safety-critical.

    One very interesting tool in the Adacore product lineup is Spark, a formal prover for Ada code. Formal proving started for software but has been commercially much less successful than its cousin in hardware-proving, perhaps in part because of the loose structure of common programming languages. As a tightly constrained language, Ada makes software-proving apparently more tractable (though presumably you still need to bound the scope of code in which you are aiming to prove properties). This should further enhance the appeal of Adacore in safety-critical applications. By way of example there is an interesting blog on rewriting part of the control software for a drone in Ada, to make the device less prone to crashes. Safety-proving in this project was accomplished using Spark.

    As we move (asymptotically) closer to IoT hardware aiming for safety and security by construction, questions about why we can’t do the same for the software running on that hardware are likely to become more urgent. Perhaps Ada’s day in the commercial sun is dawning. You can learn more about Ada and Adacore HERE.

    More articles by Bernard…


    FPGAs allow customization of SEU mitigation

    by Don Dingee on 11-16-2016 at 4:00 pm

    Teams working on avionics, space-based electronics, weapons delivery systems, nuclear generating plants, medical imaging equipment, and other applications where radiation leads to single-event upsets (SEU) are already sensitive to functional safety requirements. What about automotive applications?

    With electronic content in cars booming, complexity rising to support advanced algorithms, and semiconductor geometries shrinking, the potential for SEU errors is growing. The idea that SEUs don’t apply to terrestrial applications is completely outdated – almost any application using the latest chip technology needs a mitigation strategy. SEUs are also a function of how much atmosphere is between the chip and the sun. A chip seated in the Purple Row of Coors Field at just 5280 feet is four times more likely to experience an SEU than the same device at sea level. That same chip driven into nearby Rocky Mountain National Park with road surfaces over 11,000 feet becomes eight times more susceptible.
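The altitude figures above imply a simple scaling that can be made concrete. The sketch below just log-linearly interpolates between the rough data points cited (1x at sea level, 4x at 5280 ft, 8x at about 11,000 ft); it is an illustration of the cited scaling, not a real radiation-environment model:

```python
import math

# Rough relative SEU rates cited in the article: (altitude_ft, relative rate).
# Toy interpolation between these points only -- not a physical model.
POINTS = [(0, 1.0), (5280, 4.0), (11000, 8.0)]

def relative_seu_rate(altitude_ft: float) -> float:
    """Piecewise log-linear interpolation between the cited data points."""
    if altitude_ft <= POINTS[0][0]:
        return POINTS[0][1]
    for (a0, r0), (a1, r1) in zip(POINTS, POINTS[1:]):
        if altitude_ft <= a1:
            frac = (altitude_ft - a0) / (a1 - a0)
            return 2 ** (math.log2(r0) + frac * (math.log2(r1) - math.log2(r0)))
    return POINTS[-1][1]   # clamp above the highest cited altitude

print(relative_seu_rate(5280))    # 4.0 (Coors Field)
print(relative_seu_rate(11000))   # 8.0 (Rocky Mountain NP roads)
```

The takeaway is the same as the prose: a car driven into the mountains sees a meaningfully harsher SEU environment than the same car at sea level.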


    To solve several problems, many automotive designers are turning to FPGAs. One motivation is ISO 26262 and requirements traceability. Rather than relying on a merchant ASIC with indeterminate steps implemented, an FPGA can be completely customized to support both functional requirements and ISO 26262 requirements.

    FPGAs also give designers another customization capability: tuning redundancy and mitigation techniques to handle the possibility of SEU errors. Mitigation steps are important because an error can propagate through an entire chip, and indeed throughout the entire car, very quickly. Detecting and correcting errors close to the source is key to maintaining safe operation.

    The question is not if you’re going to get SEU errors – the question is, what will you do about it?

    One approach would be just to implement everything in triplicate and use voting to resolve outcomes. The likelihood that SEUs hit two legs of a triplicated circuit at the exact same time is slim. However, just tripling the real estate may push an FPGA design that currently fits over the edge. A more realistic approach uses redundancy more efficiently. This also factors in when considering ISO 26262 and realizing that different subsystems have different levels of functional safety requirements.

    Different FPGA constructs also have different redundancy needs. Finite state machines (FSMs) may use Hamming-3 codes, and safe FSMs may dictate forced resets upon error detection, or a specific error recovery scheme. Triple redundancy can come in three flavors: local TMR, where registers are triplicated and fed to a voter; distributed TMR, where TMR blocks are separated on the chip to further reduce chances of SEUs; and block-level TMR, popular where black-box IP is deployed. Memory and I/O can also be triplicated, and other techniques such as inferring ECC and creating error flags can protect memory.
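At the bit level, the local-TMR voter described above reduces to a simple majority function. A minimal sketch (plain Python rather than RTL, purely to show the logic):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    # Classic gate-level majority: an output bit is 1 iff at least two of
    # the three replicas agree on 1. Operates per-bit on whole registers.
    return (a & b) | (b & c) | (a & c)

# A single-event upset that flips bits in one replica is masked:
golden = 0b1011
upset = golden ^ 0b0100          # one replica corrupted by an SEU
assert tmr_vote(golden, golden, upset) == golden
```

As the article notes, the scheme only fails if two replicas are hit in the same bit position at the same time, which is why distributing the replicas physically across the die further reduces the risk.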

    Synopsys has spent huge amounts of effort on Synplify Premier to automate high-reliability techniques for FPGA synthesis. Automotive designs can simply tap into that experience, adding functional safety steps weeks faster compared to manual implementations.

    Joe Mallett, formerly of Xilinx and now senior product marketing manager at Synopsys, has written his thoughts in the Synopsys Insight newsletter:

    http://www.synopsys.com/Company/Publications/SynopsysInsight/Documents/snps-insight-issue1-2016.pdf

    This newsletter contains other articles on ISO 26262 and automotive-certified IP which should be of interest to automotive teams.

    Where mitigating SEUs used to be a gory, manual process, Synopsys is making the road to functional safety in FPGA designs much easier with Synplify Premier.


    Can Huawei Shift From Carrier Leader To Global Cloud Player?

    by Patrick Moorhead on 11-16-2016 at 12:00 pm


    Huawei Technologies is a large, $60B China-based company that, while many in the U.S. may not be familiar with it, is a very big name in carrier and telco equipment and consumer smartphones—especially in China and EMEA (Europe, Middle East, Africa). The company is making serious moves to expand its reach into the carrier and enterprise cloud and take on the role of a “global ICT leader,” believing that the “C” (communications) in “ICT” will make the difference.

    I attended Huawei Connect 2016 in Shanghai a few weeks ago along with approximately 20,000 Huawei ecosystem partners, customers, press and analysts. This was the company’s first integrated conference—combining the three separate Cloud, Network, and Developer’s Congresses Huawei has traditionally held. The theme of the conference was appropriately titled “Shape the Cloud,” and it was their first big opportunity on the public stage to demonstrate the company’s new global cloud trajectory.

    This isn’t a research paper or research brief; I am only giving an overview from their CEO’s keynote address and subsequent meetings, but I may follow up with those details if there is interest. Also, I will be focusing on carrier and enterprise, not their consumer business.

    A Chinese Carrier Powerhouse
    Before we dive into Huawei Connect, I wanted to provide some background on Huawei for those unacquainted with the company. First off, they are a very large company by revenue, racking up $60B in 2015, and their recent 2016 financials put them on a much bigger track, with 40% growth in the first half. Huawei was founded in 1987 and is privately owned by 85,000 Chinese employees. The other 90,000 non-Chinese employees, though they cannot own the company, are provided tracking shares so they can share in the upside. While they aren’t public, they do issue audited topline financials every six months.

    The largest share of their $60B 2015 revenue comes from China (42%), with the rest from EMEA (32%), APAC (13%) and the Americas (10%). They’ve yet to make serious inroads in the Americas, but we could be seeing more growth in that direction. Huawei takes an interesting, non-committal stance on the U.S. It’s kind of a “we don’t need to be successful here but it would be nice.”

    $36 billion (60%) of Huawei’s 2015 revenue came from the telco and carrier market. This has historically been their bread and butter—they claim 45 of the top 50 telcos under their umbrella, excluding notable exceptions such as AT&T, Verizon, and Sprint. The rest of their business is comprised of 33% consumer ($20 billion), and 7% enterprise (a small, but quickly growing $4 billion).

    In the carrier and telco space, Huawei competes with Ericsson and Nokia, both of whom are having their challenges. Ericsson’s CEO was pushed out by the board this July, and Nokia’s Networks business was down 11% YoY for Q2. Huawei holds the #1 smartphone unit share in China, as Lenovo and Xiaomi declined, and is #3 globally behind Samsung and Apple. The most impressive thing about Huawei’s smartphone ascension is that they aren’t just doing cheap; they are increasing share in the midrange and premium smartphone space.

    Another interesting tidbit is that Huawei employs an innovative rotating CEO system, wherein three senior executives take six-month turns as acting CEO of the company. That hasn’t worked well at any other company I’m aware of, but it seems to be working well so far at Huawei.

    Huawei’s Cloud Vision

    Current rotating CEO Ken Hu delivered the Day 1 Connect 16 Keynote—the first half of which focused on the usual meta-concepts of digital transformation, IoT, the cloud, and preparing for what Hu referred to as the “intelligent world.” He differentiated between what he called the current Cloud 1.0 era, based on “agile innovation, good user experience, and low costs,” and the impending Cloud 2.0 era, “in which enterprises are the main players, and we will see the rise of countless industry clouds.” Hu went as far as to predict that by 2025, more than 85% of enterprise applications will be cloud-based.

    It’s that new era in which Huawei is trying to position themselves as the “Enabler and Driver of the Intelligent World.” The second half of the keynote outlined Huawei’s overall strategy: staying customer-centric, providing innovative cloud technology, becoming their customers’ preferred partner, and proactively contributing to the growing cloud ecosystem.

    I have attended many big tent events, and there wasn’t much here I hadn’t heard in other keynotes over the past few years. Huawei did introduce the “industry cloud,” but this is a new word, not a new concept: it is basically a vertical approach to private clouds. Clouds are vertical now and vary by workload, latency, responsiveness, security, regulation and scalability. I like it, it’s not new, and it underlines Huawei’s vertical approach, which I saw everywhere at the show.

    Customer centricity through customization and “open”
    Hu touted the company’s 28 years of customer-centricity, saying it is part of Huawei’s DNA. Every company says they are customer-centric and in the west most IT companies have stopped using the term because customers are skeptical. I do believe Huawei when they say this as they appear to do so many customizations for their customers.

    Hu went on to say that as technology providers, a one-size-fits-all approach isn’t always the right solution—Huawei pledged to learn from customers and develop innovative cloud solutions that are right for their specific needs. Hu cited Huawei’s development of open cloud architecture as an example of meeting their large enterprise customer’s desire for independence and interoperability, and emphasized the company’s commitment to “openness, security, and enterprise grade performance” in all of their cloud solutions.

    I was a bit skeptical at first about the “most open” approach, as everyone says they’re “open,” but Linux Foundation executive director Jim Zemlin literally got on stage on day two and said Huawei leads in “open.” Not “a leader” or “one of the leaders,” but “THE leader.” That blew me away.

    Strategic partner to drive beyond the “dumb pipe”
    The next key part of Huawei’s strategy is being more than just a vendor to their customers—being a true strategic partner. Like customer-centric, pretty much all IT providers say they are strategic. Hu highlighted Huawei’s work with Deutsche Telekom (a German telecom company) as a case study: this past June, Deutsche Telekom released their Open Telekom cloud, a set of private and public cloud services and software solutions developed for the enterprise. They partnered with Huawei to provide hardware and software solutions for the project. Harkening back to the previous point, Hu said that the most noteworthy aspect of the collaboration was that Open Telekom was “completely driven by customer needs,” and that so far the product had been receiving widely positive reviews, though DT gave no indication of revenue.

    I believe Huawei is dedicated to and trying very hard to be a strategic partner to the carriers. I think Huawei can deliver what it takes to help them, but I question whether the carriers can pull it off. I have to start with some background. Carriers, aka “telcos,” try their hardest not to be relegated to the non-differentiated “dumb pipe.” You can differentiate a pipe, and carriers globally are all trying to provide value-add services to the consumer and/or the enterprise. In addition to video services, Huawei is helping carriers fulfill their desired goal of what I will call the “carrier cloud.” The carrier cloud is all about providing services like those Amazon AWS, Microsoft Azure, IBM SoftLayer and Google Cloud provide today, and more as workloads sub-segment and advance even further in the future.

    I see a potential edge IIoT (industrial IoT) carrier play, but I am very skeptical about everything else. Carriers haven’t exactly been lighting the enterprise world on fire and aren’t investing like the “Super7” cloud giants. This is Huawei’s best play, they are playing it even better than Ericsson, and they certainly bring a lot more to the table than Nokia.

    The biggest question for me on Huawei’s “data play” is still how they stack up to Cisco Systems, Dell EMC, and Hewlett Packard Enterprise, whose business is the private and hybrid cloud. Huawei has the carriers’ attention and does have enterprise capabilities, but the others have 25 years of enterprise experience and are more focused than ever. Enterprise is 7% of Huawei’s current business, and 100% of Cisco’s, HPE’s and most of Dell EMC’s business. Huawei will do very well in the “carrier cloud” and will grow in the enterprise for sure, but they face a whole different kind of competition in the cloud.

    Cloud ecosystem development

    The last big point Hu emphasized was Huawei’s commitment to the development of the cloud ecosystem. Instead of simply releasing “a handful of clouds on its own,” in Hu’s words, Huawei is looking to help their customers build a variety of different clouds—in turn building out the entire cloud ecosystem. Huawei also has strategic business alliances with some big names—SAP, Accenture, Microsoft, and Intel, to name a few.

    According to Hu, these alliances help promote openness, collaboration, and shared success for everyone, which in turn guarantees the ongoing development of the cloud ecosystem. He went on to stress the importance that everyone involved in the cloud ecosystem bring their unique strengths to the table, concluding the thought by saying, “We are Huawei. Our role is to make good products and serve our customers well.”

    I’ll admit, I was initially surprised to see SAP’s and Accenture’s aggressiveness, but when you drill in, where Huawei is successful, SAP and Accenture want to be more successful, and vice-versa. Also, having Intel CEO Brian Krzanich on-stage day 3 I thought was a big deal.

    Wrapping up
    Huawei Technologies is an impressive company. The company is a force with carriers and in smartphones. At Connect 16 in Shanghai, they did a good job communicating what they want to do, who they want to do it with and why they want to do it, but it was challenging to parse why they are better at what they do. I think the answer could lie in R&D and innovation. Huawei invested $9.2B in R&D in 2015 and $38B over the last ten years, which puts it in the top 5 of all high tech. They are also a leader in PCT (Patent Cooperation Treaty) published patent applications. Patents don’t guarantee future success, but they are certainly a leading indicator of innovation.

    It’s still far too soon to say whether or not Huawei is going to pull off their reinvention into the enterprise and the carrier cloud. Huawei has a long, successful track record in the telecom industry with carriers, and while that won’t automatically translate to cloud and the enterprise, it speaks volumes about the company’s ability to be competitive, innovate, and stay on top of market trends. It will also be interesting to see if this new trajectory allows them to gain more ground in North America and Western Europe, where they haven’t had as much success as in China, APAC, Eastern Europe, the Middle East and Africa. I know the company will get big carrier wins, but what I’m most looking forward to are big enterprise cloud wins.

    Read more from Patrick….


    Improving on EMACS for VHDL Creation

    by Bernard Murphy on 11-16-2016 at 7:00 am

    OK – I admit I titled this piece as clickbait. There is a core of designers for whom belief in the supremacy of EMACS for RTL creation comes close to religion. Some will read only the title and jump immediately to penning searing comments questioning my intelligence, experience, parenthood and ability to tie my own shoes. Some, I hope, will read on if only to more precisely craft their rebuttal. A few may actually find some ideas of interest in this piece and especially in the links at the end.

    I should admit upfront I am not an EMACS user, though I have worked with many fans. I rely instead for my information on the views of a group of folks at Sigasi who have been and remain very dedicated users but have come to realize that EMACS isn’t perfect for RTL (particularly VHDL) creation and that it is possible to conceive of better solutions without sacrificing your immortal soul. To give you a sense that these guys truly are among the EMACS faithful, one of their pieces is titled “Why Emacs VHDL mode is so Great. And Why We Want to Beat it”.

    Let’s start with an obvious point. A major premise of VHDL (springing from its foundation in Ada) is to be correct by construction, at least as far as possible, so it is intended that you spend much more time getting to a compilable piece of code than you would in other, more flexible languages. As getting to a successful compile gets harder, it is natural to want to simplify the task. VHDL-mode for EMACS was created over 20 years ago to address this need and has evolved into a truly great VHDL-aware editor. It provides great macros, a design hierarchy browser and a code formatter, which includes vertical alignment. And it has code fixing such as updating sensitivity lists to avoid latches. Plus, and this is important, it’s free. You don’t have to argue with your manager to have a part of the EDA budget carved out for an editor.

    So why, the Sigasi folks themselves ask, would any EMACS fan in their right mind want to switch to a GUI-based editor for which they are going to have to pay? First, fans will respond that they see no value in a GUI. This part is as much religion as utility. EMACS (and Vi) has a GUI, it’s just less self-consciously “pretty” than modern GUIs. But the utility part is important – while you don’t want to pay for more pretty, you might pay for more utility. Utility of this type can become a requirement for ease of use; in the (production) software world, this battle was over a long time ago. No one would work for a software company that offered only basic text editors for development.

    That meaningful added utility is possible should not be surprising. Macro functions in any general-purpose text editor are necessarily limited to whatever can be accomplished with a shallow understanding of the text, since that is all they can build using regular expression matching. Within those limits they can still do some pretty useful things, but they can never be as capable as a special-purpose editor tuned to a deep understanding of a language, or if they try to do so, macros become unusably slow. For example, VHDL configurations can have significant impact on understanding chip hierarchy, but EMACS VHDL-mode does not look at configurations, presumably because this would be very slow.
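To make the "shallow understanding" point concrete, here is a toy sketch (in Python, purely for illustration; EMACS macros are of course written in Lisp) of regex-level parsing of a VHDL entity. The pattern and the entity are both invented for this example. It works on the happy path, which is why regex-based macros are useful at all, but it has no notion of scope, comments, multi-name declarations, or configurations, which is where a language-aware editor pulls ahead.

```python
import re

# A hypothetical VHDL entity, embedded as a string for the demo.
VHDL = """
entity counter is
  port (
    clk   : in  std_logic;
    reset : in  std_logic;
    count : out unsigned(7 downto 0)
  );
end entity;
"""

# Shallow "understanding": a name, a colon, then a direction keyword.
# This is roughly the level of insight a text-editor macro can build on.
PORT_RE = re.compile(r"^\s*(\w+)\s*:\s*(in|out|inout)\b", re.MULTILINE)

ports = {name: direction for name, direction in PORT_RE.findall(VHDL)}
print(ports)  # {'clk': 'in', 'reset': 'in', 'count': 'out'}
```

The same pattern silently misparses `-- old: clk : in std_logic` inside a comment, or `a, b : in std_logic`, and it has no way to follow a configuration to a different architecture, illustrating why deep language awareness cannot be retrofitted onto regular-expression matching without becoming unusably slow.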

    Sigasi have summarized their view of differences particularly between EMACS capabilities and those in their Sigasi Pro editor. You’ll see that they give high marks to EMACS, but they note several significant areas where that approach falls short. These aren’t “pretty” GUI features. They are functional features that, if you like what VHDL-mode does for you, you might also reasonably want to have but you may never see in EMACS macro packages.

    |                                    | Other editors | EMACS VHDL mode             | Sigasi Pro              |
    |------------------------------------|---------------|-----------------------------|-------------------------|
    | Commercially supported             | ?             |                             |                         |
    | Syntax highlighting                | ✓             | ✓                           | ✓                       |
    | Semantic highlighting              | ✘             | ✘                           | ✓                       |
    | Word-based autocomplete            | ✓             | ✓                           | ✓                       |
    | Language templates                 | ?             | ✓                           | ✓                       |
    | Context-sensitive templates        | ✘             | ✘                           | ✓                       |
    | Instant error reporting            | ✘             | ✘                           | ✓                       |
    | Configurable key bindings          | ?             | ✓                           | ✓                       |
    | Extensible                         | ?             | ✓ (in LISP)                 | ✓ (in Java)             |
    | VHDL code formatting               | ✘             | ✓                           | ✓                       |
    | Navigation; search                 | ✘             | broken; based on CTAGS      | ✓                       |
    | Rename refactoring                 | ✘             | ✘                           | ✓                       |
    | Hover to see declaration           | ✘             | ✘                           | ✓                       |
    | Chip hierarchy                     | ✘             | limited; no configurations  | ✓                       |
    | Generate Makefile                  | ✘             | limited; only one library   | ✓ (multiple libraries)  |
    | Component instantiation            | ✘             | ✓ (port translation)        | ✓ (autocomplete)        |
    | Inspect constants and generics     | ✘             | ✘                           | ✓                       |

    Finally, Sigasi acknowledge that EMACS diehards will not be won over easily. By raw metrics, EMACS will be faster, run in a smaller footprint, be user-customizable and easily run on a remote terminal. But are raw metrics the right metrics? Should you be measuring how quickly you can do an edit or how quickly you can get to a functioning simulation? And on remote operation – really? We should all know how to run remote X-terms these days. Preferences in an editor will always be somewhat religious, but it does seem that carrying those same preferences to their logical limit inevitably leads to editors like Sigasi Pro. You can learn more about Sigasi’s attempt to convert the EMACS faithful HERE.

    Let the apoplectic comments begin…

    More articles by Bernard…


    5 of the Top 20 Semiconductor Suppliers to Show Double-Digit Gains in 2016!

    by Daniel Nenni on 11-15-2016 at 4:00 pm

    Semiconductor Market Researcher IC Insights released an update to the 2016 semiconductor sales forecast which is interesting on many different levels. It really has been an exciting year for the semiconductor industry, absolutely. Two of the stars of this year’s report happen to be two of my favorite fabless companies, Nvidia and MediaTek, who will post record gains of 35% and 29% respectively.

    The fastest growing top-20 company this year is forecast to be U.S.-based Nvidia, which is expected to post a huge 35% year-over-year increase in sales. The company is riding a surge of demand for its graphics processor devices (GPUs) and Tegra processors with its year-over-year sales in its latest quarter (ended October 30, 2016) up 63% for gaming, 193% for data center, and 61% for automotive applications.

    The second-fastest growing top-20 company in 2016 is expected to be Taiwan-based MediaTek, which is forecast to post a strong 29% increase in sales this year. Although worldwide smartphone unit volume sales are expected to increase by only 4% this year, MediaTek’s application processor shipments to the fast-growing China-based smartphone suppliers (e.g., Oppo and Vivo) are forecast to help drive its stellar 2016 increase.

    Nvidia and MediaTek serve different markets but they have two important things in common: VERY strong leadership and a VERY strong foundry partnership with TSMC.

    Nvidia CEO Jen-Hsun Huang and MediaTek CEO Tsai Ming-Kai could not be more different. Jen-Hsun is loud and arrogant while Tsai is quiet and humble. The traits they do share are vision and a laser-like focus on market opportunity. Both have also received the Dr. Morris Chang Exemplary Leadership Award from the Global Semiconductor Alliance in recognition of “exceptional contributions to driving the development, innovation, growth and long-term opportunities of the fabless semiconductor industry”.

    On the foundry side, Nvidia and TSMC are collaborating on a cost-effective, HPC-specific 7nm process to be introduced in Q4 2017. MediaTek is also collaborating with TSMC on an SoC version of 7nm but, more importantly, will be TSMC’s top customer for the 16FFC fab in China due to go online in 2018. Right now the China SoC market is 28nm-centric, but that will change in 2018. MediaTek will also use TSMC InFO technology for added cost reduction and performance improvement, distancing itself from the other 14nm SoC offerings for China and other emerging SoC markets (India).

    The interesting thing to note about Nvidia’s Q3 2016 results is the surge in data center and automotive business (which is what I mean by vision and market opportunity focus). Given their success in two of the hottest semiconductor market segments, one might predict that Nvidia is a prime acquisition target. In fact, I had wrongly predicted that QCOM would buy Nvidia instead of NXP and I still think they should have but I digress…

    My feeling today is that Nvidia should be the one doing the acquiring so I’m working on a shopping list for Jen-Hsun. Please post your suggestions in the comment section and I will make sure he gets them.

    You can get a PDF version of the IC Insights research bulletin HERE.


    IoT Standardization and Implementation Challenges

    by Ahmed Banafa on 11-15-2016 at 12:00 pm


    The rapid evolution of the #IoT market has caused an explosion in the number and variety of IoT solutions, and large amounts of funding are being deployed in IoT startups. Consequently, the focus of the industry has been on manufacturing and producing the right types of hardware to enable those solutions. In the current model, most IoT solution providers build every component of the stack, from the hardware devices to the relevant cloud services (what they like to call their “IoT solutions”). As a result, there is a lack of consistency and standards across the cloud services used by different IoT solutions.

    As the industry evolves, the need for a standard model to perform common IoT backend tasks, such as processing, storage, and firmware updates, is becoming more relevant. In that new model, we are likely to see different IoT solutions work with common backend services, which will guarantee levels of interoperability, portability and manageability that are almost impossible to achieve with the current generation of IoT solutions.
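As a rough illustration of that model, the sketch below shows what a shared, device-agnostic backend interface might look like for the common tasks the article names (telemetry processing, storage, and firmware tracking). Every name here is invented; this is a hedged sketch of the idea, not any real IoT platform's API.

```python
# Hypothetical sketch of a "common backend": the device-facing operations
# (ingest telemetry, store it, track firmware) are identical for every IoT
# solution; only the payload contents differ per device type.
class CommonBackend:
    def __init__(self):
        self.telemetry = {}   # device_id -> list of readings
        self.firmware = {}    # device_id -> installed firmware version

    def ingest(self, device_id, reading):
        """Store one telemetry reading for a device."""
        self.telemetry.setdefault(device_id, []).append(reading)

    def set_firmware(self, device_id, version):
        """Record a completed firmware update."""
        self.firmware[device_id] = version

    def needs_update(self, device_id, latest):
        """True if the device lags the latest published firmware."""
        return self.firmware.get(device_id) != latest

backend = CommonBackend()
backend.ingest("thermostat-1", {"temp_c": 21.5})   # any device, same API
backend.set_firmware("thermostat-1", "1.0.2")
print(backend.needs_update("thermostat-1", "1.1.0"))  # True
```

Because the interface is the same for a thermostat, a lock, or a streetlight, solutions built against it become interoperable and portable across vendors, which is exactly the property the current build-everything-yourself model lacks.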

    Creating that model will not be easy by any stretch of the imagination; there are hurdles and challenges facing the standardization and implementation of IoT solutions, and the model needs to overcome all of them.
    IoT standardization
    The hurdles facing IoT standardization can be divided into four categories: Platform, Connectivity, Business Model, and Killer Applications:

    • Platform: This part includes the form and design of the products (UI/UX), the analytics tools used to deal with the massive data streaming from all products in a secure way, and scalability, which means wide adoption of protocols like IPv6 in all vertical and horizontal markets is needed.
    • Connectivity: This phase includes all parts of the consumer’s day and night routine, from wearables to smart cars, smart homes and, in the big scheme, smart cities. From the business perspective, we have connectivity using the IIoT (Industrial Internet of Things), where M2M communications dominate the field.
    • Business Model: The bottom line is a big motivation for starting, investing in, and operating any business; without sound and solid business models for IoT, we will have another bubble. The model must satisfy all the requirements for all kinds of e-commerce: vertical markets, horizontal markets, and consumer markets. But this category is always a victim of regulatory and legal scrutiny.
    • Killer Applications: In this category, three functions are needed to have killer applications: control “things”, collect “data”, and analyze “data”. IoT needs killer applications to drive the business model using a unified platform.

    All four categories are inter-related; you need all of them to make the model work. Missing one will break the model and stall the standardization process. A lot of work is needed in this process, and many companies are involved in each of the categories; bringing them all to the table to agree on a unifying model will be a daunting task.

    IoT implementation
    The second part of the model is IoT implementation. Implementing IoT is not an easy process by any measure, for many reasons, including the complex nature of the different components of the IoT ecosystem. To understand the gravity of this process, we will explore the five components of IoT implementation: sensors, networks, standards, intelligent analysis, and intelligent actions.

    Sensors
    There are two types of sensors: active sensors and passive sensors. The driving forces for using sensors in IoT today are new trends in technology that have made sensors cheaper, smarter, and smaller. The challenges facing IoT sensors are power consumption, security, and interoperability.

    Networks

    The second component of IoT implementation is transmitting the signals collected by sensors over networks, with all the different components of a typical network, including routers and bridges in different topologies. Connecting the different parts of networks to the sensors can be done using different technologies, including Wi-Fi, Bluetooth, low-power Wi-Fi, WiMAX, regular Ethernet, Long Term Evolution (LTE), and the recent promising technology of Li-Fi (using light as a medium of communication between the different parts of a typical network, including the sensors).

    The driving forces for widespread network adoption in IoT are high data rates, low prices for data usage, virtualization (the X-defined networking trend), the XaaS concept (SaaS, PaaS, and IaaS), and IPv6 deployment. But the challenges facing network implementation in IoT are the enormous growth in the number of connected devices, availability of network coverage, security, and power consumption.

    Standards

    The third stage in the implementation process includes the sum of all activities of handling, processing, and storing the data collected from the sensors. This aggregation increases the value of data by increasing the scale, scope, and frequency of data available for analysis, but aggregation is only achieved through the use of various standards, depending on the IoT application in use.

    There are two types of standards relevant to the aggregation process: technology standards (including network protocols, communication protocols, and data-aggregation standards) and regulatory standards (related to security and privacy of data, among other issues). Challenges facing the adoption of standards within IoT are standards for handling unstructured data and security and privacy issues, in addition to regulatory standards for data markets.

    Intelligent Analysis

    The fourth stage in IoT implementation is extracting insight from data for analysis. IoT analysis is driven by cognitive technologies and the accompanying models that facilitate their use. With advances in cognitive technologies’ ability to process varied forms of information, vision and voice have also become usable, opening the door to in-depth understanding of the non-stop streams of real-time data. Factors driving the adoption of intelligent analytics within the IoT include artificial intelligence models, growth in crowdsourcing and open-source analytics software, and real-time data processing and analysis. Challenges facing the adoption of analytics within IoT include inaccurate analysis due to flaws in the data and/or model, legacy systems’ ability to analyze unstructured data, and legacy systems’ ability to manage real-time data.

    Intelligent Actions

    Intelligent actions can be expressed as #M2M (machine-to-machine) and M2H (machine-to-human) interfaces, for example, with all the advancement in UI and UX technologies. Factors driving the adoption of intelligent actions within the IoT include lower machine prices, improved machine functionality, machines “influencing” human actions through behavioral-science rationale, and deep learning tools. Challenges facing the adoption of intelligent actions within IoT include machines’ actions in unpredictable situations, information security and privacy, machine interoperability, mean-reverting human behaviors, and slow adoption of new technologies.

    The Road Ahead

    The Internet of Things (IoT) is an ecosystem of ever-increasing complexity. It is the next wave of innovation that will humanize every object in our lives, the next level of automating every object in our lives, and the convergence of technologies will make IoT implementation much easier and faster, which in turn will improve many aspects of our lives at home, at work, and in between. From refrigerators to parking spaces to houses, IoT is bringing more and more things into the digital fold every day, which will likely make IoT a multi-trillion-dollar industry in the near future. One possible outcome of successful standardization of IoT is the implementation of “IoT as a Service”: if that service is offered and used the same way we use other flavors of “as a service” technologies today, the possibilities for real-life applications will be unlimited. But we have a long way to go to achieve that dream; we need to overcome many obstacles and barriers on two fronts, consumers and businesses, before we can harvest the fruits of such a technology.

    Article published on #IEEE-IoT : http://iot.ieee.org/newsletter/july-2016/iot-standardization-and-implementation-challenges


    References:
    http://www.dbta.com/BigDataQuarterly/Articles/10-Predictions-for-the-Future-of-IoT-109996.aspx
    https://campustechnology.com/articles/2016/02/25/security-tops-list-of-trends-that-will-impact-the-internet-of-things.aspx
    http://dupress.com/
    https://www.linkedin.com/pulse/iot-implementation-challenges-ahmed-banafa?trk=mp-author-card
    https://www.linkedin.com/pulse/what-next-iot-ahmed-banafa?trk=mp-author-card
    Figures Credit: https://pixabay.com/en/binary-code-man-face-board-trace-1327503/ and Ahmed Banafa

    Ahmed Banafa Named No. 1 Top Voice to Follow in Tech by LinkedIn in 2016


    Quality in Hard IP

    by Bernard Murphy on 11-15-2016 at 7:00 am

    I was CTO at Atrenta, home of SpyGlass, for many years before the company was acquired by Synopsys, so I know a thing or two about IP quality, to paraphrase a popular commercial. The problem is that even in the best-run IP shops, errors happen. Sometimes they happen on simple changes, especially when you think “This IP has been very carefully checked and the change I just made is so small it won’t affect anything”. They also happen in configurable IP, especially near architecture transitions where not all possible configurations have been comprehensively validated.

    At Atrenta we worried about soft IP and were sometimes told “We have it covered internally and we’ll catch any escapes in simulation”. Then we’d wait (with confidence) for a call-back after a quality problem made it to silicon. But the “we’ve got it covered” argument is even more difficult to defend with hard IP. Simulation isn’t going to tell you that you have a mismatch between your GDSII and LEF models or an un-routable pin or many other potential problems in the long list of files now needed to describe a hard IP. Sure you’ll catch some of these in layout, but that’s way too late to be finding basic problems that could trigger major rework. And some will still escape to silicon.

    Where can errors happen? Perhaps in foundation (cell) libraries. These get pretty well shaken out unless you’re an early user but some cells at some corners could remain untouched until you stumble across a problem. Memory compilers are a more likely source for an undiscovered problem because they are configurable. Then there are internally sourced hard macros – PLLs, PHYs, ADCs and DACs, voltage references, special I/Os. And there are hardened digital IPs – ARM cores, GPUs and accelerators – optimized by an internal team for performance, for whatever power profile you need for this design and for layout footprint. In all of these cases it’s much more likely you will be the first user on any given incarnation of a block and that you may see multiple releases of a block before tapeout.

    What are typical errors? A small subset of examples, encountered on production designs, include:

    • Pin direction mismatch between views, pins in the wrong layer
    • Missing labels, or label spelling errors or labels in wrong layer
    • Abutment errors (layers do not touch outline)
    • Pins not on grid
    • Delay decreases with increasing output load, non-paired setup and hold times in Liberty file
    • CCS curves have more than one peak or a correction current in the tail
    • ECSM curves have large deviations between ECSM and NLDM values
    • Transistor bulk terminal connections in Spice incorrect
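One of the checks above, "delay decreases with increasing output load", lends itself to a simple sanity test. The sketch below is hypothetical: made-up numbers standing in for one row of an NLDM-style delay table, not a real Liberty parser, and nothing here reflects how Crossfire implements the check. It only illustrates the kind of machine check that must replace visual inspection at this data volume.

```python
# A minimal sketch of a delay-monotonicity check: for a well-behaved cell,
# delay should be non-decreasing as output load increases.
def monotonic_delay(loads, delays):
    """Return True if delay never decreases as load increases."""
    pairs = sorted(zip(loads, delays))  # order points by load
    return all(d2 >= d1 for (_, d1), (_, d2) in zip(pairs, pairs[1:]))

# Invented numbers: loads in pF, delays in ns, mimicking one table row.
loads = [0.01, 0.05, 0.10, 0.50]
good  = [0.12, 0.15, 0.19, 0.45]
bad   = [0.12, 0.15, 0.11, 0.45]   # delay drops at 0.10 pF: flag it

print(monotonic_delay(loads, good))  # True
print(monotonic_delay(loads, bad))   # False
```

The hard part in practice is not the arithmetic but scale and correspondence: running checks like this across every cell, corner, and view in terabytes of release data, and cross-checking that Verilog, LEF, GDSII, Liberty, and Spice views all agree, which is the infrastructure argument made below.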

    Checking for these kinds of problem obviously can’t be left to visual inspection, across potentially terabytes of data, much less at each point in the design evolution where errors may have crept in. You could build tool-based scripts to check for some problems, but that gets messy when you want to check correspondence between a Verilog view, a layout view and a Spice view. And you have to worry about the correctness and currency of over 30 parsers. If you feel that building and maintaining that kind of infrastructure is a good use of your company’s time, go for it. Or you could take a look at Crossfire from Fractal Technologies.

    I first became aware of Fractal several years ago. I felt that Crossfire would be a really good fit with SpyGlass – SpyGlass for soft IP quality and Crossfire for hard IP. For various reasons we didn’t pursue that further, but not because I was unimpressed with the product or the company. I still feel Crossfire would be a great complement to SpyGlass. I also happen to know that, without naming names, some of the most significant design teams in the world use Crossfire. These teams are staffed by the best of the best. When they believe it is important to perform these checks, it’s worth taking note.

    You can learn more about Fractal and Crossfire HERE.

    More articles by Bernard…