
New details on Altera network-on-FPGA
by Don Dingee on 08-28-2014 at 4:00 pm

Advantages to using NoCs in SoC design are well documented: reduced routing congestion, better performance than crossbars, improved optimization and reuse of IP, strategies for system power management, and so on. What happens when NoCs move into FPGAs, or more accurately the SoC variant combining ARM cores with programmable logic?

One of our own SemiWiki readers left this comment in a discussion on one of these SoC architectures a while back:

What would be interesting is some NoC tools that can abstract the buses away so that you are not stuck with a particular AMBA/AXI implementation and can use the FPGA fabric for communication transparently without knowing what buses are being used.

The academic community has also been contemplating the benefits for a while. Mohamed Abdelfattah gave an interesting talk in a University of Toronto seminar a couple years ago – his introduction lays out the benefits of NoCs over unstructured FPGA interconnects, and he raises a scenario of an FPGA-tuned hybrid hard/soft NoC and its advantages.

The point of that discussion: don't just grab NoC IP and take the DIY route of layering it on top of an FPGA design. What is needed is a much more integrated approach, one that delivers the benefits efficiently. Last year, Arteris announced that Altera licensed FlexNoC, and a lot of folks were wondering what that would look like. The press release gave some non-specifics about timing margin and frequency requirements, and we've been waiting for more to be revealed.

There may have been documentation floating around under NDA, but a few days ago Altera publicly updated the user manuals for the Arria 10 MPSoC as they ramp up from sampling (now) to general availability (soon). I’m not here to debate “industry’s only 20nm”, or the DSP capability, or the competitive timing – we’ll leave that for some other day. I want to focus on the difference the Arteris NoC makes when tightly integrated into an FPGA.

The new document of interest is the Arria 10 Hard Processor System TRM Chapter 7, System Interconnect. A big point of interest is the seven independent level 4 (L4) buses, each on its own clock domain, allowing data traffic to flow at multiple performance levels. Addressing our reader's comment from earlier, the L4 buses also support multiple protocols: AMBA AXI, AHB and AHB-Lite, APB, and Open Core Protocol (OCP).

Security is also right at the top. Using the firewall capability of the NoC, users can configure access privileges on a per-peripheral and, in many cases, a per-transaction basis. There are actually two layers of firewall on the SDRAM: one working with the accelerator coherency port of the ARM core, and a second used when cache misses occur. This could be a significant architectural plus, not only for secure communications but also for safety-partitioned designs.
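To give a feel for the programming model, here is a minimal C sketch of per-peripheral firewall configuration. Every name, address, and bit assignment in it is a hypothetical illustration, not the actual Arria 10 register map; the TRM documents the real one.

#include <stdint.h>

/* Hypothetical NoC firewall registers; illustrative only. One
 * security control word per peripheral gates which masters may
 * reach it. */
#define NOC_FW_BASE        0xFFD00000u   /* assumed base address */
#define NOC_FW_PERIPH(n)   (*(volatile uint32_t *)(NOC_FW_BASE + 4u * (n)))

#define FW_ALLOW_SECURE    (1u << 0)     /* secure-world masters */
#define FW_ALLOW_NONSECURE (1u << 1)     /* non-secure masters */
#define FW_ALLOW_UNPRIV    (1u << 2)     /* unprivileged transactions */

/* Restrict one peripheral to secure, privileged masters only. */
static inline void fw_make_secure_only(unsigned periph_id)
{
    NOC_FW_PERIPH(periph_id) = FW_ALLOW_SECURE;
}

The value of doing this in NoC hardware, rather than in software checks, is that a violating transaction never reaches the peripheral at all.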

It is fast; one sentence says it all:

The main portion of the system interconnect runs at up to half the MPU main clock frequency (mpu_clk).

That would translate to 600 MHz, combined with an 800 MHz FPGA fabric clock. The NoC is not adding unwieldy overhead that gets in the way of performance. There is also the NoC's software abstraction to consider. It would be extremely difficult, not to mention slow and bloated, to recreate what Altera has done integrating the Arteris FlexNoC in this device.

In closing, I'd emphasize that while the view from 50,000 feet is similar – a dual ARM Cortex-A9 in an FPGA – the details of the Arria 10 MPSoC are quite different from that other device we talk about a lot. It's hard to say a feature makes something clearly better or worse in the overall context; it really depends on what an application is trying to do, which makes one architecture more suitable than the other. This network-on-FPGA approach may open some new doors, particularly in terms of the firewall capability, that were previously hard to implement.


Related articles:

Compositions allow NoCs to connect easier

A song of optimization and reuse


Granite River Labs and TSMC Expand Agreement
by Paul McLellan on 08-28-2014 at 7:01 am

For several years now, TSMC has run increasingly sophisticated IP validation. Ramping a new process as a foundry requires a number of things to all come together almost simultaneously: the process, of course, and some designs to run and start to recover the huge capital investment a modern fab entails. With many SoCs having over a hundred IP blocks, getting the IP qualified is an essential part of a design team being able to get a design into production. Taking a systematic approach to IP quality is paramount for successful SoC products.


TSMC’s latest IP validation has multiple steps, increasingly expensive to execute but with increasing confidence level in the IP. The first 3 steps are a review of the IP without manufacturing it. The later steps involve running extensive tests on IP that has been manufactured, typically in a shuttle run for a new process that is not yet in volume production. For more mature processes where a lot of IP has been in use for many years, the sheer number of designs in successful volume production is its own guarantee of IP quality.

1. Physical review (DRC, LVS, ERC, antenna checks)
2. DFM compliance (DFM-LPE, LPC, dummy fill, VCMP)
3. Pre-silicon assessment (design kit review, design review)
4. Silicon assessment (tapeout review, silicon report review)
5. Split lot silicon assessment (split lot tapeout and report review)
6. IP Validation Center (audit IP testing results by TSMC test lab)
7. Volume production

Last month, TSMC's IP Validation Center and Granite River Labs deepened their relationship and further expanded the TSMC9000 IP validation ecosystem. This covers expanded test capacity, test auditing, and posting IP validation results on TSMC-Online. This is part of item #6 above, leveraging GRL's expertise in the test and validation of high-speed interfaces.

GRL will serve as an IP validation partner to TSMC. The test methodology development and correlation will be done at GRL's office in Hsinchu (where TSMC is headquartered, of course). The bulk of the work will be carried out at GRL in Santa Clara and Bangalore. TSMC will subcontract to GRL to create a test methodology for the specific PHY. GRL can then use their extensive expertise and wide range of costly equipment to perform the testing. The results will then be available through TSMC-Online, where they can be searched by potential users.


GRL has extensive electrical test facilities using Introspect, Teledyne LeCroy, Tektronix, Keysight, and others. They also have protocol test solutions that can handle error injection, stress testing, protocol exerciser automation, and so on. They have R&D sites in Oregon and Japan, and labs in Santa Clara, Bangalore, Penang, Hsinchu, and Taipei. The Asian HQ is in Singapore; the worldwide HQ is in Silicon Valley.


    More articles by Paul McLellan…


    Xilinx UltraScale gives you 25% more packing than you know who…
    by Luke Miller on 08-27-2014 at 11:30 pm

Coke with no ice. You see, I am not cheap, or even frugal, but a good steward. One of the things I hate the most is waste. You know: lights on in every room, the door open during winter, driving 25 miles to save a dollar on gas.

One will notice fairly quickly that Xilinx UltraScale 20nm FPGAs, coupled with the new-fangled analytical router, are very lean on waste. There is nothing more frustrating than planning your FPGA design and only hitting 50-60% utilization before running into timing and/or routing issues. Xilinx has a very good white paper just out that I would encourage you to read: wp455, ‘UltraScale Architecture: Highest Device Utilization, Performance, and Scalability’.

    I will quickly note here, the paper mentions the ‘competition’. Now, I do not want to be presumptuous here, nor name names, so I will not mention that the competitor is Altera, which would not be prudent, after all it could be Achronix, right? But certainly not Altera. Shucks, who am I fooling, it is Altera.

A nice test case was run on both the Arria 10 (I assume) and a 20nm UltraScale part. Both used the SAME design code, from OpenCores, and off you go. The results, as expected, hammered the competition. See below:

Before you get all spun up here, BOTH devices had about the same logic cell density, around 1160K cells. This is a blind test, all things equal. No griping please. UltraScale was able to use roughly 25% more resources than Altera. This is a real deal, and a big deal. Do you like paying for resources you cannot use? No one does! The test also highlights not only Xilinx's ability to route better but also architectural improvements over Altera. Xilinx rebuilt its router and pretty much its FPGAs.

The other highlight of the white paper comes in the form of scaling with Xilinx UltraScale, meaning design migration from 20nm to 16nm. “For example, any UltraScale FPGA in a package ending D1924 is compatible with all other UltraScale FPGAs in D1924 packages. This strategy provides package footprint migration between Kintex UltraScale FPGAs and Virtex UltraScale FPGAs built on both 20 nm and 16 nm FinFET processes.” This is great, as PCB rework is both costly and time consuming.

Rounding out white paper 455 is the fact that Xilinx's UltraScale has ASIC-like clocking. This is key, not only for timing closure but for the ability to pack fuller, tighter designs at a higher clock frequency. So you can use more of the Xilinx FPGA, and more cycles in the Xilinx FPGA. That is a double whammy. Speaking of which, remember that show with the whammies? Big bucks, no whammies, stop… I will leave this blog on a very corny note: if you want no whammies in your design, then may I encourage you to read up on Xilinx and make the wise choice for your next design, or even your current design. It may not be too late to switch, and you truly will not regret it.

    Also read:

    Develop High Performance Machine Vision in the Blink of an Eye


    Silicon Measurement Data Gives Insights to Using Metal Fill With Inductors
    by Tom Simon on 08-27-2014 at 4:00 pm

    Metal fill requirements for inductors are now a fact of life. Fill has long been seen as detrimental to device performance due to parasitic capacitance. The necessity of fill arises from the need to ensure planarization of dielectric layers by using chemical mechanical polishing. Without adequate fill, areas of the chip can suffer from uneven planarization.

If using fill is inevitable, the first question is how designers can minimize its impact. The design community has had to rely on intuitive answers as to what the impact actually is and, consequently, how to reduce it. It is axiomatic that regardless of the level of impact, if it can be accurately modeled, then successful designs incorporating fill can be built. In the absence of a quantitative way to assess the impact of fill, designers are working in the dark and assuming unwanted risk.

    3D Electromagnetic solvers simply are not up to rigorously solving for the hundreds of thousands of elements that are seen in filled inductor layouts. So naturally designers sought to eliminate fill from their designs. If fill is not present in the design then there is no need to accurately model it.

Nevertheless, Lorentz Solution has run PeakView to rigorously solve test cases of limited size to learn more about the impact of metal fill on inductor performance. A variety of fill shapes and structures were used; even eddy currents inside fill elements were examined. One of the first things learned was that adding stacked vias to metal fill creates large capacitive coupling to the substrate. This is intuitive, as the ‘plate’ is effectively moved to the bottom metal, much closer to the substrate.
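A quick parallel-plate estimate shows why; the numbers below are assumed purely for illustration, and only the inverse dependence on distance matters:

$$ C \approx \frac{\varepsilon_{ox} A}{d}, \qquad \frac{C_{\text{with vias}}}{C_{\text{floating}}} \approx \frac{d_{\text{floating}}}{d_{\text{with vias}}} $$

If stacked vias drop the effective plate from, say, 3 µm above the substrate to 0.3 µm, the coupling capacitance of that fill element grows roughly tenfold.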

    It is easy to comply with foundry rules for via fill without placing connecting vias in all the metal fill structures. Typically via fill density requirements are on the order of single digit percentages. So moving forward it was decided to focus exclusively on floating fill without inter-layer connections.

At low fill densities there is a minimal bottom-plate capacitance shift when the fill is fully floating. To obtain data at higher densities, and with fill on all layers as would be seen in production silicon, Lorentz concluded that the only reliable way to proceed was with silicon data.

Lorentz teamed up with Altera, TSMC, and Mentor Graphics. Lorentz Solution designed inductors at 20nm and embedded them in test keys. A series of different fill shapes and densities were applied to the device under test. The silicon was fabricated and measured by TSMC, who generously agreed to assist in this effort. As the next step, Lorentz de-embedded the raw data and performed data analysis.


What made this project practical is that PeakView has a method for simulating metal fill that dramatically reduces the size of the problem. This feature is in PeakView's CMP package, which, in addition to handling fill intelligently, automatically merges the slotting and striping commonly found in wide metals. The CMP package can automatically identify metal fill in designs, alleviating the need for a manual step to remove fill from the layout prior to EM simulation. Once the fill is identified, the user can choose to have the simulation ignore it or model it.

    Silicon test chip data was used to show a good match with PeakView CMP results. The other valuable thing learned was that the fill densities and structures used had negligible impact on L and a very small impact on Q – and that was mostly from a slight addition to the parasitic capacitance.


    With hard data showing that fill can have a low impact and that this impact can be properly modeled, designers should be much more comfortable using recommended fill densities in their designs. Now let’s analyze the potential benefits of using recommended fill.

    When metal fill is used, the result is a more planar assembly of the dielectric and metal layers. The foundries provide detailed information on the process geometry in technology files that are used directly or indirectly in every aspect of the design flow. TSMC provides iRCX files and other foundries use ITF files for this purpose. Every tool that relies on this information relies on the fab producing silicon that conforms to the foundry specification. Simply put, using fill that is out of range can produce silicon stack up geometry that does not match the iRCX or ITF data provided. This exposes designers to an unforeseen risk because analysis tools may be working with input files that do not reflect what was fabricated.

    It is well understood that without metal fill, inductors may be the source of moisture infiltration into the die. After moisture enters the chip, it can rapidly move to device junctions, causing catastrophic chip failure. Some foundries insist that seal rings be used when fill is reduced to avoid issues arising from dielectric damage.

Seal rings also take up room on the die, and beyond creating a moisture barrier they do not always provide a significant design benefit. What if designers could remove seal rings? Suddenly it looks like, with the combination of stack-up information integrity and area savings, fill might not be so undesirable.

In conclusion, silicon data shows that when fill is properly designed, its detrimental effect is not significant. Further, the extent of this effect can be simulated effectively by PeakView's CMP package. Lastly, there appear to be good reasons to maintain the recommended fill densities around and underneath inductors.

A thorough study of the effects of fill at 20nm has proven useful in removing confusion regarding the role of fill in inductors at advanced process nodes.


    Broadcom Internet of Things
    by Paul McLellan on 08-27-2014 at 7:01 am

One of the perks of blogging here is getting press invitations to lots of events, often in interesting locations I never even knew existed. Tonight it was a Broadcom event at SPUR here in San Francisco. The evening was about the Internet of Things (IoT). Everyone knows that IoT is sort of hyped, but it is also a real opportunity. Not that there is just one market some big guy can dominate; rather, it is lots of little markets for stuff you would never think of.

    So here are a few of the “things” I saw this evening:

• A wireless toothbrush: whenever you use it, it uploads data to the cloud on how long you brushed and where, so you can track how conscientious you are. Or, more likely, your kids are.
• A big red flashing light with a Bud logo on it, sold in Canada. You set it up with your favorite hockey team (Sharks round here!) and when a game is about to start, or when the team scores, the light flashes and it shouts for the team. I think it suggests you have a Bud too. They produced a few as a small project; they sold out instantly, and since then they have made lots.
• A little toy for kids that records voicemail and, when they reply, feeds it back to your mobile phone.

Broadcom was actually talking about a platform called WICED. It is a prototyping kit for people wanting to build IoT projects. It has multiple sensors (temperature, humidity, gyroscopes, etc.) and Bluetooth connectivity, so you can be up and running in literally minutes. Although Broadcom is, of course, a chip company, the kit comes with a full software stack and support for iOS and Android apps, big data in the cloud, and so on. They announced the product just recently and already there are companies with apps and more.

A lot of the key IoT features are built in: tamperproof encryption, authentication, and privacy controls. In some IoT areas, such as games, we don't care that much about this stuff. But in other areas, such as medical devices or our cars, we care a lot. These are life-critical areas, to say the least.


    One thing that makes IoT so interesting is that there are no large dominant companies. It is an open field. Lots of devices and ideas, thousands of players, no monopolies, and a low barrier to entry for startups. Some parts of the business will end up being SoCs I’m sure, but for now most of it is integrating microcontrollers, sensors and more. Get the Broadcom chip and add software, for example. If it is a big success you can cost reduce it later. But for now it is all about getting stuff to market and seeing what people are interested in. As the old saying goes, throw it against the wall and see what sticks.


    More articles by Paul McLellan…


    Do you check your circuit DC stability?
    by Jean-Francois Debroux on 08-26-2014 at 8:00 pm

Most analog designers are aware of loop stability. In most cases, stability is understood as AC stability, where the goal is ensuring enough phase (gain) margin to keep the loop from entering oscillation. But before studying AC stability, DC stability should be questioned. What is this DC stability that only a few people think of?
    Continue reading “Do you check your circuit DC stability?”


    Opting for ARM software scalability
    by Don Dingee on 08-26-2014 at 12:00 pm

Behind much of the success of the ARM architecture is a scalable software model, where in theory the same code runs on everything from the smallest member of the family to the largest. In practice, profiles, a variety of hardware execution units, and resource constraints in low-power scenarios enter the picture. As a result, operating systems have evolved very differently.

    Going “bare metal” or with a very compact kernel solves some of the problem at the low end; developers can work close to the hardware and #ifdef around support for variations in resources. If one needs more advanced features, such as graphics, connectivity stacks, and virtualization, or is hoping to build on value from somewhere in the open source community, an operating system with a defined set of APIs becomes much more important.
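As a quick illustration of that bare-metal style, here is a hedged C sketch; the configuration macros and peripheral addresses are invented for the example, not taken from any vendor's headers.

#include <stdint.h>

/* Hypothetical per-board configuration; names are invented for
 * illustration, not from any vendor's actual headers. */
#define BOARD_HAS_UART1   0             /* small variant: one UART only */

#define UART0_BASE        0x40001000u   /* assumed peripheral addresses */
#define UART1_BASE        0x40002000u

static void uart_putc(uintptr_t base, char c)
{
    *(volatile uint32_t *)base = (uint32_t)c;  /* write to TX register */
}

void log_char(char c)
{
#if BOARD_HAS_UART1
    uart_putc(UART1_BASE, c);   /* dedicated debug UART when present */
#else
    uart_putc(UART0_BASE, c);   /* otherwise share the console UART */
#endif
}

This scales down nicely, but every such conditional is one more configuration the team has to test, which is where a defined OS API starts to pay for itself.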

Coming from the other direction, full-featured operating systems haven't scaled down well to microcontrollers, with the biggest roadblock being partitioned memory, which requires an MMU. The ability to virtualize memory and harden against task crashes is a huge plus, as the popularity of Linux and Android attests. With improvements in processor speed, “real-time” on larger cores has become more a matter of controlling background tasks than of interrupt response and context switching times.

    Scanning the commercial and open systems offerings for operating systems usually provides a very different answer in support for ARM Cortex-M versus ARM Cortex-A, even leaving out the latest 64-bit ARMv8 discussion. There are, of course, a handful of operating systems that straddle the boundary, generally in the vein of microkernels scaling up.

One of the optional hardware execution units in the Cortex-M architecture is a memory protection unit (MPU), available on Cortex-M4, Cortex-M3, and Cortex-M0+ cores. It provides eight protection regions, which can implement the basics of access rules and task protection when running out of flash, without a full-blown virtual memory model or the complexity of MMU programming.
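To show the flavor, here is a minimal sketch against the architecturally defined ARMv7-M MPU registers, marking one 32 KB flash region read-only for all code. It is an illustration under stated assumptions, not a complete port; a real project would use the vendor's CMSIS definitions, add memory barriers, and install a MemManage fault handler.

#include <stdint.h>

/* ARMv7-M MPU registers (architecturally defined addresses). */
#define MPU_CTRL  (*(volatile uint32_t *)0xE000ED94u)
#define MPU_RNR   (*(volatile uint32_t *)0xE000ED98u)
#define MPU_RBAR  (*(volatile uint32_t *)0xE000ED9Cu)
#define MPU_RASR  (*(volatile uint32_t *)0xE000EDA0u)

#define RASR_ENABLE      (1u << 0)
#define RASR_SIZE_32KB   (14u << 1)    /* region size = 2^(SIZE+1) bytes */
#define RASR_AP_RO_ALL   (6u << 24)    /* read-only, privileged and user */
#define CTRL_ENABLE      (1u << 0)
#define CTRL_PRIVDEFENA  (1u << 2)     /* default map stays for privileged code */

/* Program region 0 to cover 32 KB of flash, read-only for everyone. */
void mpu_protect_flash(uint32_t flash_base)
{
    MPU_RNR  = 0;                              /* select region 0 */
    MPU_RBAR = flash_base & ~0x7FFFu;          /* base must be 32 KB aligned */
    MPU_RASR = RASR_AP_RO_ALL | RASR_SIZE_32KB | RASR_ENABLE;
    MPU_CTRL = CTRL_ENABLE | CTRL_PRIVDEFENA;  /* enable the MPU */
}

Eight regions of this sort are enough to wall tasks off from each other's stacks and peripherals, without any of the page-table machinery an MMU brings.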

    Mentor Graphics has just extended their Nucleus RTOS into this space, offering the same reliability from their experience with industrial and medical applications in MCU territory. By leveraging the MPU, they are bringing the same process model across the ARM spectrum. Developers still have access to the more advanced features of Nucleus on bigger ARM cores, including a multicore framework for mixing and matching operating systems.

The key observation here is that in smaller devices, applications often don't need a huge number of tasks and memory partitions, but having just a few may make the difference between risking the integrity of an application and having to step up to a much bigger, hungrier SoC. MCU vendors are generally seeing the light, offering the optional MPU on Cortex-M as standard fare and enabling a lightweight version of techniques previously available only on larger Cortex-A cores. It is another example of how the MCU and SoC spaces are starting to blur.

    Hardware and software teams need to carefully think through use cases when deciding what to include and what to omit. There may be short term savings in opting out of some execution units when tailoring an ARM implementation, but in the long term those features may emerge as very valuable. We will likely see more of these fine-grain decisions – memory protection, encryption, and DSP extensions among the candidates for support – favoring inclusion moving forward, helping software scale better across ARM processor families.

    Related articles:

    The Secret Essence of an IoT Design

    More “toddlers” innovating on the IoT


    Mentor: It’s All About Cars
    by Paul McLellan on 08-25-2014 at 7:08 pm

Mentor's results came out last week. They were good. Wally opened the call:

Thanks. Once again results for Mentor Graphics in the quarter exceeded our guidance. Revenue of $260.2 million and non-GAAP earnings per share of $0.23 were ahead of our guidance of $250 million and $0.15 earnings per share. Strength in bookings for the quarter was almost all system design-related, particularly automotive customers.

    Automotive seems to be an area where Mentor is doing very well. In my experience Auto is a difficult market. You walk in and they tell you they are working on the 2023 model year. Or their R&D people say that the tools they are evaluating will be adopted in another four years. Of course EDA has a long sales-cycle, 9 months or so, whatever, but automotive makes that seem fast.

    But Mentor seems to have cracked the nut. 90% of Japanese bookings come from automotive OEMs (that’s the buzzword for people who actually build cars like Honda and Toyota) and their suppliers. Overall bookings from automotive companies were 3X what they were last year. That is a huge increase. Not 30% but 300%. Two of the top 10 bookings for the quarter were tier-1 Japanese automotive suppliers. Because of the sales-cycle it has been a long time coming but now Mentor is riding the wave and the competitors are out looking at the incoming swell.

Last quarter Mentor also acquired XS Embedded, a leader in embedded solutions for automotive driver information, infotainment, and advanced driver assistance systems. They also acquired Nimbic, a cloud solution for electromagnetic simulation, probably most famous for having Raul Camposano, ex-CTO of Synopsys, as its CEO.

Let's hear Wally, Mentor's CEO, say it:

Mentor serves this market in many areas, but of the automotive-specific products there are three that constitute most of the revenue. First, the Capital family of wire harness products provides a complete enterprise solution from concept through design, manufacturing, costing, and after-sales service. This family has evolved over the years and now serves more than 35 automotive OEMs, 75 Tier-1 suppliers, and nearly 30 heavy equipment suppliers.

In addition, Capital is used by several dozen aerospace and electronic systems companies, including more than half of the leading commercial, regional, and military aircraft OEMs. Fiscal Q2 was an all-time record for the Capital family of products.

Second, Mentor's embedded software products and services for the automotive industry have grown rapidly in the last two years and set a second-quarter record. This family includes a complete embedded Linux software development environment for automotive applications, which in some cases may exceed 100 million lines of code.

Third, and most recently, has been the explosion in demand for AUTOSAR software development tools, reflecting the broad adoption of this new methodology by the automotive industry. Ten OEMs and 32 Tier-1 suppliers have deployed Mentor's AUTOSAR solutions in 119 production car models, and Mentor was the first company to offer a full AUTOSAR development suite.

One of the top-10 bookings of the quarter was driven entirely by AUTOSAR. This product line also saw all-time record bookings this quarter, and we expect the industry overall to source hundreds of electronic control units with AUTOSAR this year.

Of course it is not all automotive. Greg Hinckley, my old boss at VLSI Technology, filled out the color:

Strength in automotive accounts drove second-quarter revenue and non-GAAP earnings per share to levels ahead of guidance. Record Q2 revenues of $260.2 million were $10.2 million over guidance and 3% over last year. With continued attention to expenses, all of the incremental revenue fell through to earnings, driving non-GAAP EPS, as Wally said, to $0.23, $0.08 ahead of guidance. This is the 22nd consecutive quarter we have exceeded our earnings guidance.

    So Mentor are in good shape and have a strength in automotive that none of the other EDA companies can claim.

    More articles by Paul McLellan…


    Secure at any IoT deed
    by Don Dingee on 08-25-2014 at 3:00 pm

    In his classic book “Unsafe at Any Speed”, Ralph Nader assailed the auto industry and their approach to styling and cost efficiency at the expense of safety during the 1960s. He squared up on perceived defects in the Chevrolet Corvair, but extended his view to wider issues such as tire inflation ratings favoring passenger comfort over handling characteristics.

History has not treated Nader's work kindly, possibly because of his politics, including a crusade on environmental issues that spurred creation of the US Environmental Protection Agency. Sharp criticism of Nader's automotive fault-finding came from Thomas Sowell in his book “The Vision of the Anointed”. He targeted “Teflon prophets,” Nader foremost among them, who foretell impending calamity from questionable data unless government intervenes as regulatory savior.

Sowell's most scathing indictment of Nader was for failing to understand the trade-off between safety and affordability. Others targeted Nader's logic by suggesting some non-zero level of risk and injury is acceptable if society progresses, supported by data showing the Corvair was actually no worse on safety than its contemporaries on the automotive market at the time.

    Yet, almost five decades later, we have Toyota sudden acceleration damage awards, GM ignition switches and massive recalls in progress, and the prospect that someday soon an autonomous car may go haywire. The problem seems to be not errors of commission, but errors of omission; complex engineering requirements, design, and test are becoming increasingly difficult. Getting all that done at volumes and prices needed to drive model year expectations and consumer market share is a big ask.

    In an industrial context of the IoT, “safety critical” design is a science, with standards, and certification, and independent testing. In application segments such as aerospace and defense, medical, industrial automation, and others – even the automotive industry, which has made huge strides in electronics and software development – safety and risk are proactively managed.

    Security of consumers on the IoT is another matter. Devices are inexpensive, often created by teams with little to no security experience. Worse yet, there is a stigma around many security features as unnecessary overkill that would slow down performance, get in the way of usability, or increase costs beyond competitiveness. This is an accident waiting to happen.

    Or perhaps, one already in progress, if we believe the recent study on firmware in a sampling of consumer devices. A lot of folks think benevolent hackers are also polytetrafluoroethylene-coated, but it is hard to dispute there is cause for concern among embedded devices when it comes to security – especially when those devices connect to networks.

    One of the areas cited in the study is encryption, and some rather sloppy handling of keys when it is used. Across the industry, embedded software is wildly inconsistent in approaches to encryption. As the study points out, developers are prone to stamp out copies of aged, flawed solutions because they are comfortable with and invested in a particular approach.

    Regulation is the last thing we need here. Engineers need a lot more education, starting from the basics of including and using hardware encryption units on MCUs and SoCs, through the state-of-the-art knowledge in cryptography and certificate management, and up to IT-style approaches such as over-the-air software updates and two-factor authentication.

We also need some deeper thought on encryption implementations, beyond just NIST recommendations. In a web context we have Transport Layer Security (TLS), but that protocol requires a full IP stack and a lot more horsepower than many small embedded devices can afford. On top of that, hardware encryption is currently very vendor-dependent. Vendors like Atmel are working with ARM on TrustZone technology to create newer implementations based on Trusted Execution Environment APIs, tuned for IoT devices instead of data center use.
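To make the constraint concrete, here is a hedged C sketch of the sort of lightweight challenge-response a small device might use where full TLS will not fit. The hw_aes128_encrypt and hw_random calls stand in for vendor-specific hardware driver routines and are purely hypothetical, as is the framing.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical vendor HAL: encrypt one 16-byte block with a key held
 * in hardware key storage, and gather hardware entropy. */
extern void hw_aes128_encrypt(const uint8_t in[16], uint8_t out[16]);
extern void hw_random(uint8_t *buf, size_t len);

/* Verifier side: issue a fresh challenge, check the device's reply.
 * Both sides share the hardware-provisioned key; no IP stack, no
 * certificates, one block-cipher operation per authentication. */
int authenticate_device(void (*xfer)(const uint8_t *challenge, uint8_t *response))
{
    uint8_t challenge[16], expected[16], response[16];

    hw_random(challenge, sizeof challenge);   /* never reuse a challenge */
    hw_aes128_encrypt(challenge, expected);   /* what a genuine device will say */
    xfer(challenge, response);                /* send challenge, collect reply */

    uint8_t diff = 0;                         /* constant-time compare, to */
    for (size_t i = 0; i < sizeof expected; i++)  /* avoid timing side channels */
        diff |= expected[i] ^ response[i];
    return diff == 0;
}

The point is not this particular scheme; it is that a sound, reviewable pattern built over a hardware crypto block costs a few dozen lines, not a full protocol stack.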

    Historically, encryption has been applied to securing closed systems – the IoT presents a paradox. If it devolves into a myriad of smaller, effectively closed systems that only intermittently share data, we may gain some benefit, but will never reach the vision.

The best-case scenario is that an effective set of industry practices for encryption in consumer IoT devices emerges before problems become widespread and defeat the very purpose of sharing data with the cloud. We need developers not to avoid encryption, but for that to happen it has to be cost- and implementation-effective, and easier to use.

    Related stories:

    The “Key” to Reality

    Is this thing real? Symmetric authentication will tell you!


    The EDA Ice Bucket Challenge Just Got Real!
    by Daniel Nenni on 08-25-2014 at 7:00 am

    Raising four children is no easy task, believe me. My beautiful wife and I always felt it was important to foster the charitable side of our children by volunteering at the food bank, cleaning up local waterways, and other activities we could do as a family. To be clear, that is why my family did the ALS Ice Bucket Challenge.

“It is amazing to me to think that a little bit of water and ice could lead to a cure for ALS,” says EDA veteran Rob Chadwick, who was diagnosed with ALS in 2007.

    Most of the email I have received about the EDA Ice Bucket Challenge thus far has been positive but there will always be negatives out there and that is just something I have learned to live with. When I posted the first blog I had no idea if any of the people I challenged would actually do it so I slipped in a ringer, Mike Gianfagna, because I knew he would. Mike has a big heart and a great sense of humor and he did not disappoint. Mike also made a generous donation to ALS:

    Mobile: www.semiwiki.com/forum/files/Videos/MikeGIceBucketChallenge.mp4

    The second challenge video I received was from Mentor’s Brian Derrick and it actually brought tears to my eyes. Knowing how my daughter felt about being part of my challenge I can only imagine how this man’s children felt being part of Brian’s:



Mobile: www.semiwiki.com/forum/files/Videos/ALSIceBucketChallengeMobile.m4v

    Rob Chadwick is one of our own. He worked for Mentor Graphics up until his ALS diagnosis in 2007. Just so you know, ALS results in paralysis and death via respiratory failure usually within five years of diagnosis. My goal in life is to see my grandchildren through college which will bring me into my 80s. Rob’s goal is to see his children through high school which brings him to about 50. You can read his story HERE and if you do decide to donate please use the link on that page so we can see the donations grow. They are about halfway to their financial goal of $68,000 so each and every one of us can make a difference here, absolutely.

    If you want to be an active part of this amazing effort do not wait to be challenged, challenge yourself and challenge others! I will publish all videos sent to me on SemiWiki and will continue to do my best to get the word out for Rob and his family.

    Also Read: EDA Ice Bucket Challenge!

    The Community of Hope is an online community of tribute funds created by ALS advocates who want to establish a lasting legacy in honor or memory of someone special’s battle with the disease. Tribute fund webpages are fully customizable, allowing you to set a fundraising goal and track your progress; upload the story of your ALS connection, including photos and video; and continually engage your network by communicating through email, social media, and your very own Community of Hope blog! It’s quick and easy to get started, and we’re here to help every step of the way! For more information about the Community of Hope and The ALS Association, check out the ALS Resource Center today.