
On-chip Firewall

by Paul McLellan on 04-22-2014 at 8:00 am

We have had the Snowden revelations that the NSA has gone rogue, Target lost a zillion credit cards, the Heartbleed bug meant that the main security protocol of the internet had been coded up wrong for a couple of years, RSA had records stolen, and more. One result is that people no longer completely trust a security system that depends only on software: it is too easy to break into. People want to see the low levels of security implemented in hardware, where they are out of the reach of a software break-in.

But that creates a new problem: how do you make sure that the hardware implementation isn’t equally full of loopholes? Oh, and if that isn’t enough, coming soon is the internet of things (IoT), with 50 billion devices and more on your body, in your body, in your home, in your car. It is one thing if your smartphone crashes, but bugs in your insulin monitor or blood-pressure monitor are more serious. Plus, while I might not care that people know I checked into a restaurant on Facebook, I don’t think I want them knowing my blood pressure hour by hour, never mind altering my insulin dosage (actually I don’t take insulin, but you get the idea).

Sonics’ network on chip (NoC) has a lot of security features that can be enabled. A very basic one, for example, is that block A can only receive signals from block B. If block A is where the secret keys are kept, then making sure that nobody can access it from the wireless network is a big step towards guaranteeing security.
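In software terms, a rule like that reduces to an access matrix: for each target block, the set of initiator blocks allowed to reach it. Here is a minimal Python sketch of the idea, with entirely hypothetical block names (a real NoC enforces this in hardware at the target agent, not in software):

```python
# Hypothetical NoC firewall rules: each target block lists the
# initiators allowed to reach it. All block names are invented.
ALLOWED_INITIATORS = {
    "key_store": {"crypto_engine"},           # only the crypto engine may touch keys
    "dram_ctrl": {"cpu", "dma", "wifi_mac"},  # general-purpose memory is open
}

def firewall_allows(initiator: str, target: str) -> bool:
    """Return True if the NoC should route a request from initiator to target."""
    return initiator in ALLOWED_INITIATORS.get(target, set())
```

With rules like these, a request from the wireless MAC to the key store is simply never routed, regardless of what the software on either side does.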

Handling errors is another big one. It is almost impossible to guarantee that every block on the chip will always behave perfectly, but again the NoC sits between all the blocks and is in a position to ensure that errors do not propagate. A particular kind of error is starvation, where data gets blocked because other data consumes all the bandwidth, and again the NoC firewall-on-chip is the traffic cop that guarantees quality of service.
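The standard defense against starvation is an arbiter that rotates grants among requesters so a bandwidth hog cannot lock anyone else out. This toy round-robin arbiter in Python (requester names and traffic levels are invented) shows the idea:

```python
def arbitrate(pending, n_cycles):
    """Round-robin arbiter sketch: 'pending' maps requester -> outstanding
    transfer count. Each cycle the pointer advances past the last winner,
    so even a requester with little traffic is guaranteed regular grants
    and heavy talkers cannot starve it."""
    names = list(pending)
    grants, ptr = [], 0
    for _ in range(n_cycles):
        for off in range(len(names)):
            name = names[(ptr + off) % len(names)]
            if pending[name] > 0:           # skip requesters with nothing to send
                pending[name] -= 1
                grants.append(name)
                ptr = (ptr + off + 1) % len(names)
                break
    return grants
```

Even if "video" has fifty times the traffic of "audio", the two alternate grants; a real NoC would add programmable weights and per-initiator bandwidth limits on top of this basic rotation.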


My last blog on NoC security, which goes into more under-the-hood details, is here. But I wanted to leave some room for another topic.

And now for something completely different.

This morning Gartner let Sonics publicise data for the on-chip interconnect segment. Per Gartner’s numbers, Sonics is the 7th biggest IP company, and since the first six are all public, that makes them the largest private IP company. Their revenues grew from $25M in 2012 to $36.2M in 2013, up 44.8%, for a market share of almost two-thirds. Mentor were next (I’m not even sure what they sell that falls into this classification since they don’t have a NoC), shrinking “only” 26.7% to end up with a 21.3% market share. And Gartner reckon that Arteris went from $17.0M to $5.7M, ending up with just 10% market share. One obvious question is whether that is partially due to Qualcomm’s acquisition of Arteris’s technology and team, which presumably took that business out of the merchant market by the end of the year anyway. But I can’t believe Qualcomm were two-thirds of Arteris’s revenue, so that doesn’t completely add up.


Of course, Gartner will sell you the whole IP report, Market Share: Semiconductor Design Intellectual Property, Worldwide, 2013.

The full press release should be on this page.


Importance of Data Management in SoC Verification

by Pawan Fangaria on 04-22-2014 at 6:00 am

In an era of SoCs with millions of gates, hundreds of IPs, and multiple ways to verify designs through several stages of transformation at different levels of hierarchy, it is increasingly difficult to handle such large data in a consistent and efficient way. The hardware and software, and their interactions, have to be kept consistent through appropriate files and interfaces. The SoC has to be verified perfectly, with all corner cases covered, in order to capture the short window of market opportunity while keeping the NRE (non-recurring engineering) cost within limits. An SoC that was working with a particular set of design databases, scripts and files may no longer work due to a mismatch in the version of a single file. This can place a significant burden on the design and verification teams to sort out the issue, wasting time and effort, and ultimately hurting designer productivity, cost and turn-around time.

While going through a research thesis from the University of Michigan on design and verification of digital systems, I noted this flow of the semiconductor design cycle from specification to manufacturing and packaging of chips. The actual flow is generally more complex than this, including virtual prototyping above RTL and several sub-steps and iterations at every stage. The verification at every stage can cover several aspects – function, timing, power, reliability (electromigration, EMI, thermal, etc.), physical layout, area and so on. These are achieved through several means, such as logic and timing simulation (static and dynamic), Spice-level simulation, formal verification, equivalence checking, power verification and optimization, DRC, ERC, emulation for firmware and so on. Several trial-and-error iterations are made throughout the flow. Several ECO (Engineering Change Order) loops are introduced at the final layout stage, where the design is vulnerable to the introduction of inconsistencies between the layout and the RTL or any intermediate stage. An important criterion is to keep checking the consistency of the design between different stages of the flow; for example, equivalence between RTL and gate-level netlist, layout versus schematic and netlist, and so on. Test vectors and test benches also have to be generated and maintained.

In such a scenario, where test data is as significant as the design data, or even more so, proper data management and configuration control is a must. What if the final SoC, integrated with hundreds of IPs, started failing at a particular output point? Obviously, there would be a heavy cost in checking through every IP. If data configuration/management is in place, the verification engineer can safely extract the older versions of the required files and databases and debug the isolated case with minimum effort. At this point, I would like to point out that most hidden bugs appear at the final top level, which can be very frustrating and expensive to solve if you don’t have a design data management system in place. You may be inviting a situation where you are left with no choice other than to compromise on the overall quality of the chip.

During the design cycle, different levels of hierarchy are created and collapsed depending on the way the design evolves and is optimized through several partitioning and merging schemes. This calls for robust data management which can record various hierarchical configurations and maintain consistency of data across several levels of hierarchy. With several versions of behavioral models, test benches, simulation vectors, RTL/gate netlists, etc., verification engineers have started versioning snapshots of a working verification setup. Given the short time frame engineers have for validating the design, the snapshots form a safety net for them to fall back on in the event something goes wrong in their verification setup. Taking snapshots of their working model also helps them debug any issues they may face in their current setup, and narrow the changes down to specific files.
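As a rough illustration of why snapshots help, hashing every file in a verification setup makes it cheap to pin a regression down to exactly which files changed between a working snapshot and a failing one. This is a hypothetical sketch, not any particular vendor’s mechanism:

```python
import hashlib

def snapshot(files):
    """Record a content hash for every file in a working verification setup.
    'files' maps filename -> contents (read from disk in real use)."""
    return {name: hashlib.sha256(data.encode()).hexdigest()
            for name, data in files.items()}

def changed_files(old_snap, new_snap):
    """Narrow a regression down to the files that differ between two snapshots."""
    return sorted(name for name in old_snap
                  if new_snap.get(name) != old_snap[name])
```

A real data management system versions the file contents as well, so the engineer can check out the last-known-good setup, not just identify the culprit files.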

As the design size increases, the design and verification team size also increases. The team may be spread across the globe, thus requiring the data management system to control the access of data across multiple teams, either at the same site or at worldwide sites, in order to maintain the data integrity.

Design data management is extremely important not only for SoCs, but also for IPs and IP subsystems. Designers developing the SoC/IP are typically spread over different design sites across geographical boundaries and have to collaborate closely to meet the ever-shrinking time-to-market window. Here, a design data management system is extremely useful for versioning, release and derivative management. The core flow of design and verification is more or less the same for IPs as for SoCs, and the quality of the IPs plays a significant role in the overall quality of the SoC that contains them. Proper data management of IPs can facilitate more efficient re-use in various SoCs.

As I read the paper, I couldn’t help but think of companies like ClioSoft, which gives prime importance to the above aspects in the overall semiconductor design flow within its design data management solution. The SOS platform provides flexible administration of data through easy-to-use GUIs; fast, world-wide, real-time access to data; protected sharing of design data, libraries, design kits and IPs; release and derivative management; and, of course, revision control of data. ClioSoft also provides an innovative tool, VDD (Visual Design Diff), which can point out the differences between two schematics or layouts, thus helping designers track mistakes propagated through schematics or layouts.

Also Read

The CAD Team – Unsung heroes in a successful tapeout

Cliosoft Grows Again!

High Quality PHY IPs Require Careful Management of Design Data and Processes


Ten Innovative Debugging Techniques – Pre & Post Layout

by Pawan Fangaria on 04-21-2014 at 8:00 pm

In the complex world of SoCs with multi-million gates and IPs from several heterogeneous sources, verification of a complete semiconductor design has become extremely difficult, and verification alone is not enough. In order to preserve the true intent of the design throughout the design cycle, debugging at various stages has to go hand-in-hand with design and verification; the architect of the design should be able to make expert judgements and take appropriate action before a small weakness in the design gets amplified further down the design cycle. Given the high complexity, size and density of a semiconductor design, there is nothing like having tools which help designers debug by visually showing them portions of the design, their interconnections and interfaces, associated code and results, and so on, at any stage of the design and at the level of granularity they desire.

I admire such a set of tools, equipped with very practical and useful capabilities and offered in a comprehensive integrated platform by Concept Engineering. There is an opportunity to learn about the capabilities of these tools, and how they can be leveraged through specific techniques to effectively debug large SoCs at the pre-layout as well as post-layout stages, through a free webinar on April 29th hosted by EDA Direct. It must be noted that more than 75% of the top 20 semiconductor design houses are gaining significant benefits in design productivity by using these tools, and have exercised them on some of the largest semiconductor designs in the industry. The tools are versatile enough to support most of the industry-standard formats, including Verilog, SystemVerilog, VHDL, EDIF, Spice, HSpice, Spectre, Calibre, CDL, DSPF, RSPF, SPEF, Eldo, PSpice and IBIS.

So, what are the debugging techniques to learn from this webinar? Check the summarised list below, which should entice you to sign up.

1. Render schematics on the fly for RTL, gate or Spice-level netlists to understand circuit functions in the easiest and simplest manner.

2. Extract, navigate and save fragments of circuits as Spice netlists with the ‘cone view’, for re-use as IP or external use in partial simulation.

3. Drag & drop selected elements between all design views (schematic, cone, parasitic and source code view) to cross probe and shorten debug time, especially during tape-out for full-chip debug.

4. Automatically create digital logic symbols and schematics from pure Spice netlists for easy design exploration.

5. Visualize and analyze post layout parasitic networks (in DSPF, RSPF or SPEF format) and create Spice netlists for critical path simulation and analysis.

6. Recognize CMOS functions easily by instantly turning off parasitic structures in Spice circuits to remove the clutter around transistor symbols.

7. Export schematics or their fragments into Cadence VSE (Virtuoso Schematic Editor) for further optimization and debugging.

8. Perform ERC checking by verifying and debugging connectivity; identify floating input or output nets, heavily connected nets, etc., especially in multi fan-in and fan-out structures.

9. Generate easily comprehensible design statistics and reports.

10. Extend the functionality of SpiceVision to match specific project needs by interfacing with the open database through tcl scripts.

There are many other features in these tools beyond the ones listed above; for example, clock tree analysis, timing back-annotation, integrated waveform viewing, etc. You can see through a circuit exactly what you want to see. The clean handling and performance of each operation in the tools is remarkable, something you can only appreciate after watching it.

It’s worth attending this webinar to see and appreciate the actual capabilities of this set of tools, working at the transistor, gate and RTL levels with all kinds of design views, and at the mixed-signal and mixed-language level.

Register here for a one-hour session on April 29th at 10 AM PDT.

Contact sales@edadirect.com for any more information you may require.

More articles by Pawan Fangaria…



Cadence Acquires Jasper

by Paul McLellan on 04-21-2014 at 4:06 pm

Cadence announced today that it is acquiring Jasper Design Automation for $170M in an all-cash offer. Jasper has $24M in cash, so it is really an acquisition for around $145M. I think that is around 4X revenue, but I only know rumors about Jasper’s revenue numbers.

All of the big three already have their own formal technology, but the technology the leading companies seem to depend on most heavily is Jasper’s JasperGold, which is perceived as the most advanced. In this business, perception is usually reality due to evaluations and benchmarks; brand recognition counts for little, and Jasper is already large enough that it is not a risk to choose them.

Cadence will put the Jasper technology together with their existing Incisive technology. In fact, they will put it together with all their verification technology: Verilog simulation, Palladium emulation, virtual platforms and more. The theme in verification these days is to take all the various approaches and unify them so that they use the same debuggers and user interfaces and take the same inputs and assertions, and then put some sort of metric-driven methodology together so that the most appropriate technology is used without overlap (so you don’t waste time using simulation to test something that has already been formally proven, for example).
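The "no overlap" part of a metric-driven methodology can be pictured as a planning step that subtracts whatever formal has already proven from the simulation to-do list. A deliberately simplified sketch (the property names are invented):

```python
def plan_simulation(all_properties, formally_proven):
    """Return the properties still worth simulating: everything that
    formal verification has not already proven exhaustively."""
    proven = set(formally_proven)
    return [p for p in all_properties if p not in proven]
```

In a real flow the shared input would be a common assertion database and merged coverage model rather than a list of names, but the bookkeeping idea is the same.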

Formal in general, and Jasper in particular, has improved incredibly in the last few years. The technology has gone from one that was underpowered and pretty much required a PhD in formal to get the most out of, to one whose analysis engines have grown almost exponentially in power and whose ease-of-use has made it completely mainstream. Couple that with the size of designs meaning that simulation is running out of steam, and you have the basis of a big, fast-growing business. I have no doubt that Cadence’s much larger sales force will make this a very successful acquisition, and not one that requires 5 years to pay off.

Or, to say the same thing in marketing speak: The combination of Jasper and Incisive Formal technologies and expertise will result in the most complete formal and semi-formal offerings in the industry. With its broader verification portfolio and worldwide field team, Cadence has the opportunity to accelerate the expansion of the emerging formal analysis sector as more mainstream customers adopt Verification Apps for IP and SoC development.

The full Cadence press release is here.


More articles by Paul McLellan…



NVM central to multi-layer trust in cloud

by Don Dingee on 04-21-2014 at 4:00 pm

Pop quiz: Name one of the hottest applications for non-volatile memory – A) processor and code configuration; B) RFID tags; C) secure encryption keys; D) all the above. The answer is D, but not in the way you may be thinking; a new approach is using all these ideas at once, combined in SoC designs targeting advanced security for cloud-based computing. Continue reading “NVM central to multi-layer trust in cloud”


GlobalFoundries Gets a 14nm Process

by Paul McLellan on 04-21-2014 at 10:00 am

I went to a briefing last Tuesday where Samsung and GlobalFoundries announced that they have the same process at 14nm. Dan already wrote about it, so it is old news in one sense. But I really think people underestimate its importance. In essence, reading between the lines, Samsung is licensing its 14nm process to GF. This is driven by large customers (of course, officially nobody can say who, but Qualcomm and Apple would be good guesses) who don’t want all their eggs in the Samsung basket, especially since Samsung competes with them in the biggest end-market of them all, mobile.

How similar are the processes? There is just one PDK for both companies, and designs are completely portable. I don’t know whether GF is doing a sort of Intel-style copy-exact, whereby they put everything into their Malta, NY fab just the way Samsung has it in the Austin, TX fab. The presentation says that all fabs are in sync in terms of materials, process recipes, integration and tools.

I think this is really big news. GF has only recently started to really ramp 28nm, which means that TSMC had a free run for a couple of years and was really the only game in town for most people (yes, Apple manufactured at Samsung, but the average customer isn’t going to get Sammy’s attention). This meant that there was no price competition at 28nm, plus a lot of systemic risk when only one company is building so much (if one of their fabs were damaged in an earthquake or a fire, for example).

At 14nm, between GF and Samsung, there will be four fabs that can build 14nm wafers: Samsung’s S1 in Korea for prototypes, then S2 (Austin) and S3 (Korea), and GF’s Fab 8 in Malta, New York. 28nm will continue to be run mainly in Dresden, Germany. I didn’t think to ask whether Fab 8 has actually run 14nm silicon yet, although they did have a 14nm wafer at the presentation (although, to be honest, a wafer is a wafer; it’s not like FinFET looks different to the naked eye).


Another important fact is that Samsung/GF claim that their area is more than 15% smaller than “other foundries”, by which we can read TSMC. The BEOL (metal fabric) on the process is the same as 20nm, so I don’t know how dependent this 15% is on how much memory is on the chip. They say they have the smallest memory solution and innovative layout schemes for compact logic.

I asked if this was a cost-reduction node and nobody would commit that it was. Clearly the process has lots of value for some markets with lower power, enormous density and so on. My rule of thumb for process technologies, though, is that if nobody says the cost per transistor is going down then it isn’t (or at least not much).

What is the timing on all of this? 14LPE is already qualified. The 14LPE and 14LPP (more performance, lower power) PDKs are already released. MPW shuttles are available (presumably in S1). Prototyping is happening now, with volume production by the end of the year.

GF already had its own 14nm process development going on, and that is planned to continue. I’m not sure that will turn out to be true if all the customers opt for the two 14nm processes covered by this announcement. GF has had almost unlimited amounts of money from Abu Dhabi, but money to spend doesn’t by itself get you a competitive process in a competitive timeframe.

The reason this is such a big deal is that, based on their performance at 28nm, GlobalFoundries was uncompetitive: too late to market to get more than the crumbs from TSMC’s table. A semiconductor company only makes money if its fabs are close to full, and theirs were not. If they didn’t get competitive by 14nm then the future of the company was dubious. Abu Dhubious.

This deal with Samsung means that they are back in the game. At 28nm, TSMC had over 80% market share. At 14nm (TSMC calls it 16nm; process names are just silly these days, since there is nothing 14 or 16nm about them) it will be more like 50%, since I expect that the high-volume customers will spread their love around.


More articles by Paul McLellan…


Can the NSA Get Into Your Chip?

by Paul McLellan on 04-21-2014 at 2:49 am

At DVCon, Lawrence Loh and Viktor Markus Purri gave a tutorial on Formally Verifying Security Aspects of SoC Designs. Lawrence is the director of WW application engineering and Markus is an FAE who specializes in security verification.

I’m not going to attempt to summarize an entire half-day tutorial in under 1000 words, but here is the big picture story.

To motivate the importance of security, they started with some events from the press.

  • Automotive: “At the upcoming Black Hat Asia 2014 conference, a pair of Spanish security researchers will demonstrate a smartphone-sized circuit board dubbed the ‘CAN Hacking Tool’ (CHT), which they claim will let them remotely take partial control of many vehicles over a wireless Bluetooth connection.”
  • Medical: “In two weeks of work [a researcher from McAfee] found a way to scan for and compromise insulin pumps that communicate wirelessly. … ‘We can influence any pump within a 300ft [91m] range.’”
  • Consumer Electronics: “After its PlayStation network was shut down by LulzSec, Sony reportedly lost almost $171 million. The hack affected 77 million accounts and is still considered the worst gaming community data breach ever. Attackers stole valuable information: full names, logins, passwords, e-mails, home addresses, purchase history, and credit card numbers.”

A lot of security is going to be implemented in software, but if it is not built on a secure hardware base then it won’t be secure. After all, security is like a chain, only as strong as its weakest link. Hardware architectures are emerging to deal with secure information, such as ARM TrustZone. But just by looking at the diagram below, with some links secure and some insecure, it is clear that it isn’t clear whether it is secure!


The security provisions come down to two simple propositions when you look at a block with inputs and outputs:

  • can the secure information appear at the outputs (e.g. steal the keys)?
  • can the inputs be used to overwrite the secure information (e.g. change the keys)?

There are a number of manual approaches that can be used with regular formal verification to do this, but they are tricky and error-prone to set up. For example, all the inputs might be sensitized with one value, and a different value placed in the secure registers. Then prove that the outputs can only ever take the sensitized value and never the secure value.

Jasper’s Security Path Verification App is a completely automated approach to this problem. Since it is formal, it can identify unintended paths exhaustively. Anyone can build a security system that they themselves cannot break; that is a common pitfall in security. But this proves that it cannot be broken.


The App takes as inputs the RTL and a list of undesired paths to be verified. The App does everything automatically from there to derive and generate all the properties. The outputs are counter-examples that show undesired data propagation (if there is any).

The App uses Jasper’s path sensitization technology. Internally, the tool inserts a unique tag called the taint. It then checks where the taint can propagate to in the design. If the taint starts at an input and makes it to an output, then there is a path through the design.
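Structurally, taint checking resembles reachability analysis over the design’s connectivity graph. The real tool reasons formally over RTL semantics, so it is exhaustive rather than merely structural, but the intuition can be sketched in a few lines of Python (the signal names are invented):

```python
from collections import deque

def taint_reaches(drives, sources, sinks):
    """Breadth-first taint propagation: 'drives' maps each signal to the
    signals it fans out to. Returns True if taint injected at any source
    signal can reach any sink signal, i.e. an undesired path exists."""
    seen, frontier = set(sources), deque(sources)
    while frontier:
        signal = frontier.popleft()
        for nxt in drives.get(signal, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return bool(seen & set(sinks))
```

If the taint placed on a key register can reach a debug port, the checker reports a leak; if every path from the key register stays inside the crypto block, the property holds.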

The tutorial contained lots of case studies, ranging from making sure that device registers could not be used to gain unauthorized access, to checking that scan chains could not read a secure memory, to proving that a secure microprocessor really was secure (it wasn’t).

There are a couple of white papers on the security App on the white paper page here. There is no video of the whole tutorial as far as I know.


More articles by Paul McLellan…



Maker Faire San Mateo

by Paul McLellan on 04-20-2014 at 9:30 pm

A few years ago my then-girlfriend was an artist, and she had some friends who were in the maker movement, one of whom ran a tool “lending library” and so on. So she wanted to go to the Maker Faire, which is a huge event held in the San Mateo exhibit center. In those days it was more like an outgrowth of Burning Man, but there were already early 3D printers, machine tools and some electronics around. The end of the day had the most amazing “Mentos in Coke” show you can imagine, set to music.

The Maker movement is a passionate one, and Atmel is passionate about being a part of it. So drop by with your family and invite your customers to stop by the Atmel booth to:

  • Hack some hexbugs and see a uTot robot platform with Bob Martin
  • Use a MakerBot 3D Printer
  • Visit with Quin Etnyre, 12-year-old CEO of Qtechknow, and see his demo of the Qtechknow Olympics – fun robotics challenges for all ages, using Arduino, XBee, and FuzzBots!


This year it is May 17th and 18th, just a few weeks away, with a lot more electronics and a lot more 3D printing.

Atmel has been doing a lot of work with the maker movement; two things in particular stand out.


The first is 3D printing. Almost all 3D printers are actually driven by Atmel microcontrollers. Here, for example, is one inside the Atmel Tech on Tour truck when it came by San Francisco recently. It was (I think) just a demo part that it was making, but something that would be difficult to make with other processes like injection molding. 3D printers are a very big thing in the maker movement since, almost by definition, things are being made in very small volumes. Plus it becomes possible to have open-source hardware designs: just upload the design files and print them, the 3D equivalent of stock photography.


Talking of open-source hardware, there is Arduino. This is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software. I first heard about it at the Linley Microprocessor Conference a year or two ago. It is open source, so you can build it yourself, although you can also purchase it pre-assembled. It is built around either an 8-bit Atmel AVR or a 32-bit Atmel ARM processor. It costs about $30, although clones are available for under $10. Apparently it is estimated that there are about 700,000 boards in users’ hands. Arduino actually originated in Italian academia but has since spread all over the world.

Obviously this is something of great interest to the Makers who can very cheaply add electronic control to pretty much anything without having to design their own circuit board, write their own operating system and so on.

Atmel is a sponsor of the Maker Faire. Tickets for Maker Faire are here.


More articles by Paul McLellan…


Another Intel Slide Debunked!

by Daniel Nenni on 04-20-2014 at 4:00 am

This was one of the most memorable keynotes I have ever seen. Probably because it supports my belief that the infamous Intel slide “projecting” that Intel will continue a linear manufacturing cost-per-transistor improvement at 14nm and 10nm is pure marketing fluff. Even more interesting, according to Intel, other semiconductor manufacturers can’t continue but Intel can. I knew this slide was complete nonsense, but it was impossible for me to prove since Intel does not share costing data. Well, Wally Rhines does not give up as easily as I do, and in his EDPS dinner keynote he presented compelling data contrary to Intel’s “projections”.


My beautiful wife travels with me now since our youngest child turned eighteen and I get her feedback on some of the events we attend. She absolutely loved the EDPS Workshop in Monterey since it was right on the beach and she was graciously invited to the keynote dinner. Wally’s presentation was not lost on her as she asked many questions afterwards. I then showed her my blogs on the subject and now she thinks I’m brilliant. Of course she also thinks I’m handsome so……

Wally’s presentation can be seen HERE. I really want you to take a careful look so we can have a constructive discussion in the comments section. Wally’s slides always tell a story so they are reasonably easy to follow. If you have a question I will make sure it gets answered by one of the attendees or Wally himself. This will probably be one of the most read blogs this month so I hope you can participate.

I was part of the EDPS Workshop organizing committee again this year and I’m happy to volunteer my time for this event. Even though I’m all about social media with SemiWiki I’m still old school and prefer to meet people face to face to make a personal connection which is why I drive to Silicon Valley 2-3 times a week for meet-ups and events.

Wally’s presentation was based on the “learning curve”, and he presented supporting data for the semiconductor industry, including the supply chain:
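For readers unfamiliar with it, the classic learning-curve model (Wright’s law) says that unit cost falls by a fixed fraction every time cumulative volume doubles. A quick sketch with illustrative numbers, not Wally’s actual data:

```python
import math

def unit_cost(first_unit_cost, cumulative_units, learning_rate=0.8):
    """Wright's law: each doubling of cumulative volume multiplies unit
    cost by learning_rate (0.8 is the classic '80% curve')."""
    exponent = math.log(learning_rate, 2)   # negative, so cost declines
    return first_unit_cost * cumulative_units ** exponent
```

On an 80% curve, if the first unit costs 100, the second costs 80 and the fourth costs 64; Wally’s data shows the semiconductor industry tracking this kind of curve over decades, which is what makes Intel’s claim of a continuing linear cost improvement, unique to itself, so hard to swallow.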


Hopefully someday Intel will explain this slide to us. There are 688 Intel employees registered on SemiWiki. Intel.com is one of the top referring domains. Hopefully people from Intel will participate in this conversation. Maybe someone from Intel’s “Free” Press? What do you say Intel? How about some transparency, openness, communication, and accountability?


International Workshop on Logic and Synthesis

by Paul McLellan on 04-20-2014 at 12:54 am

There are always a number of other events colocated with DAC. One this year is the 23rd International Workshop on Logic and Synthesis (IWLS), held the weekend before DAC, from May 30th to June 1st. Strictly speaking it is not colocated, since it is in the Galleria Park Hotel on Sutter Street, a few blocks away, whereas DAC itself is in the Moscone Center.

No surprises for guessing what the conference is about. The IWLS is the premier forum for research in synthesis, optimization, and verification of integrated circuits and systems. Research on logic synthesis for emerging technologies and for novel computing platforms, such as nanoscale systems and biological systems, is also strongly encouraged. The workshop encourages early dissemination of ideas and results. The emphasis is on novelty and intellectual rigor. Only complete papers with original and previously unpublished material are accepted.

The call for papers has closed and they are in the midst of finalizing the program, so it is not yet available, but the technical presentations will include 18 regular papers, keynotes and a special session, and will run from Friday, May 30 in the afternoon until Sunday, June 1 in the early afternoon. A social event is planned for the evening of Saturday, May 31. If you want to get a reasonable idea of what types of papers appear, you can look at last year’s agenda (which was in Austin, obviously, since DAC was) here. Prior to that, IWLS was not colocated with DAC.

You register for the conference on the DAC website registration page by doing the following:

  • Go here and click on Register. This will direct you to the registration page.
  • Complete all the contact information and enter your membership status. Click Select Your Participation. This will bring you to the product choice page.
  • Open the Colocated Conferences tab, select the IWLS option, and click on Checkout to proceed to checkout.

Note that the rates go up by $100 on May 7th, so do this in the next couple of weeks. The conference is sponsored by the IEEE and ACM, so members of those societies get a discount.


More articles by Paul McLellan…