
Analysis of HLS Results Made Easier
by Randy Smith on 07-10-2013 at 4:30 pm

In a recent article I discussed how easy it was to debug SystemC source code as shown in a video published on YouTube by Forte Design Systems. I also commented on the usefulness of the well-produced Forte video series. Today, I am reviewing another video in that series on analyzing high-level synthesis (HLS) results.

Cynthesizer Workbench (CynthWB) is much more than a synthesis tool. It is a complete package that gives the user the ability to develop, debug, and analyze SystemC source code. This is in addition to its obvious function of generating RTL from a SystemC description. The workbench gives the user a coherent environment for performing many of the electronic system level (ESL) design tasks.

An important facet of ESL is to compare different possible design implementations in order to decide which style of implementation best meets the constraints for a specific intended use. Sometimes the function may be implemented for maximum speed, another time for lower area; more recently, lower power is often the driving constraint. There are simple tables and graphics to show the results as a top-level summary. For example, there is a histogram showing the total area of each run with the contributions to each area result from logic, registers, muxes, memory, and control. But much more detail is available.


CynthWB supports side-by-side comparison of the results of two different HLS runs under different conditions, making it easy to see how the implementations were impacted by the constraints. The user can view side-by-side views of the parts used, the resulting RTL code, and much more. The video was quite interesting in showing the potential variations in synthesis results.

You can also split the panes in order to cross probe between relevant views of the same run. You can see things such as how a “+” (plus sign) in your SystemC source code maps to a particular adder in the parts list. Using cross probing you can see the relationship between a part in your parts list, where it is used in the RTL, or even where it came from in the original source code. Of course, a particular part may have been generated to implement different lines of source code, like multiple uses of the same adder. This type of bidirectional cross probing is quite useful in determining why certain components are needed, which helps you to further optimize your design.
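To make that mapping concrete, here is a minimal SystemC sketch of the kind of code involved; the module and signal names are mine, not taken from the video. The “+” in the loop is exactly the sort of operator that HLS implements with an adder in the parts list, and that CynthWB can cross probe back to this source line:

```cpp
#include <systemc.h>
#include <iostream>

// Illustrative accumulator module (names are mine, not from the Forte video).
SC_MODULE(accumulate) {
    sc_in<bool>          clk;
    sc_in<bool>          rst;
    sc_in<sc_uint<16> >  din;
    sc_out<sc_uint<16> > sum;

    void run() {
        sc_uint<16> acc = 0;         // reset section of the clocked thread
        sum.write(acc);
        wait();
        while (true) {
            acc = acc + din.read();  // this "+" becomes an adder instance in the generated RTL
            sum.write(acc);
            wait();                  // one accumulation per clock cycle
        }
    }

    SC_CTOR(accumulate) {
        SC_CTHREAD(run, clk.pos());
        reset_signal_is(rst, true);
    }
};

// Small testbench so the sketch runs standalone.
int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);
    sc_signal<bool> rst;
    sc_signal<sc_uint<16> > din, sum;

    accumulate dut("dut");
    dut.clk(clk); dut.rst(rst); dut.din(din); dut.sum(sum);

    rst = true;  sc_start(20, SC_NS);   // hold reset for two cycles
    rst = false; din = 5; sc_start(50, SC_NS);
    std::cout << "sum = " << sum.read() << std::endl;
    return 0;
}
```

If the tool shares one adder across several additions, cross probing from that single part would highlight each of the source lines it implements.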


As in the previous Forte video I reviewed, the video is extremely well organized and produced. The total video is less than ten minutes and it is easy to understand the moderator. Of course you cannot learn everything in ten minutes and I imagine there are several advanced features available as well. Still, I recommend viewing the video to get a good idea of the design analysis environment supported by Cynthesizer Workbench. To see what other videos are available from Forte click here. I will continue my review of the Forte video series soon.



A Goldmine of Tester Data
by Beth Martin on 07-10-2013 at 2:06 pm

Yesterday at SEMICON West I attended an interesting talk about how to use the masses of die test data to improve silicon yield. The speaker was Dr. Martin Keim, from Mentor Graphics.


First of all, he pointed out that with advanced process nodes (45nm, 32nm, and 28nm), and new technologies like FinFETs, we get design-sensitive defects. This means that even when the design passes DFM/DRC, there are some design patterns that fail. The normal way to find these is through physical failure analysis (PFA to the cool kids); after the silicon is fabricated, wafer test finds defective parts, and the product engineers decide which of those parts to cut open and inspect. They are looking for the root cause of the failure to feed back into the design process. The decisions they make for PFA are based on test data. And there is a lot of it. However, PFA alone can’t explain the root cause of layout-pattern-induced defects. The smaller feature sizes and new device structures of technologies like 14nm and FinFET introduce additional challenges.

The trick, then, is to harness, sort, and present this failure data in a way that saves time and money and moves the whole world forward. It’s useful, Keim says, to correlate this physical fail data to the layout, and to DFM rules. That makes the results actionable. It’s then essential, he continued, to filter out the noise. With a lot of data comes a lot of noise.

So, here’s what he showed:

“Test fail data contains a goldmine of information.” This picture shows how the fail data is used to find the location of a defect in the layout and what kind of defect it is. This makes more failed die suitable for PFA because they now know exactly where to look for the defect. It also uncovers systematic defects, information that feeds back to manufacturing and design in order to improve yield quickly.

Next, he explained how this same data can find critical design features.

Say PFA finds the root cause shown on the upper right. What does this tell you about the very similar patterns on the bottom? Do those also fail? You can get that answer. “It’s all in the data,” says Keim. You simply combine the output from layout-aware diagnosis (the suspect defects on the top right) and the results of your DFM analysis. The DFM rule violation becomes a property of the diagnosis suspect. With statistical analysis, you can then determine the root cause pattern and, this is key, evaluate potential fixes. That last part is important because you can get immediate feedback on how to fix that defect before you go through another round of silicon. Keim stressed how important it is to be able to validate a proposed fix to the currently observed defect. This validation will tell you if the proposed fix will actually have an impact on the current problem without screwing up things elsewhere.
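To illustrate the idea of making a DFM violation a property of a diagnosis suspect, here is a rough sketch in C++; the data structures and field names are invented for illustration and are not Mentor’s actual flow. Each layout-aware diagnosis suspect is tagged with any DFM rule violated near its location, and the rule counts across many failing die become the input to the statistical root-cause ranking:

```cpp
#include <cmath>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical records; field names are illustrative only.
struct DiagSuspect  { int die; std::string net; double x, y; };  // from layout-aware diagnosis
struct DfmViolation { std::string rule; double x, y; };          // from DFM analysis of the layout

// Tag each diagnosis suspect with DFM rules violated within 'radius' of it,
// then count how often each rule shows up across all failing die.
std::map<std::string, int> correlate(const std::vector<DiagSuspect>& suspects,
                                     const std::vector<DfmViolation>& violations,
                                     double radius) {
    std::map<std::string, int> rule_hits;
    for (const auto& s : suspects)
        for (const auto& v : violations)
            if (std::hypot(s.x - v.x, s.y - v.y) <= radius)
                ++rule_hits[v.rule];  // the DFM violation becomes a property of the suspect
    return rule_hits;
}

int main() {
    std::vector<DiagSuspect>  suspects   = {{1, "net_a", 10.0, 12.0}, {2, "net_b", 10.1, 12.2}};
    std::vector<DfmViolation> violations = {{"VIA_ENCLOSURE_MIN", 10.05, 12.1}};
    for (const auto& [rule, n] : correlate(suspects, violations, 0.5))
        std::cout << rule << " implicated in " << n << " suspects\n";  // feeds the statistical ranking
}
```

The real analysis adds the statistics needed to separate a genuinely systematic pattern from random defects, but the join itself is this simple in concept.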

He noted that for FinFETs, we need transistor-level diagnosis to find the potential defect mechanisms within the transistor itself. He says that their early research shows good correlation between where the diagnosis results say a defect is and where PFA actually finds it. To bring this to its full potential, effective transistor-level ATPG is a valuable asset, says Keim.

His final point about all the test data was about the noise. Ambiguity isn’t helpful, he said definitively. To deal with the noise, Mentor has an algorithm they are calling “root cause deconvolution.”

Convoluted, adjective, meaning involved, intricate. A convoluted explanation leaves you more confused. A convoluted hallway leaves you lost. Convoluted test data leaves you with murky conclusions. Presumably, deconvolution (don’t look for it in Merriam-Webster) clarifies the root cause of the failure from the giant, smelly swamp of raw test data. This nifty technique eliminates the noise.

These powerful ways of using test data promise to accelerate the time it takes to find the root cause of chip failures that limit yield, and thus to improve yield ramp. Because product cycles are now shorter than process technology cycles, a fast yield ramp is a key component of success in the market.

For your viewing enjoyment, and for an in-depth look at yield improvement, Mentor offers this on-demand webinar “Accelerating Yield and Failure Analysis with Diagnosis.”


Best Practices for Using DRC, LVS and Parasitic Extraction – on YouTube
by Daniel Payne on 07-10-2013 at 1:21 pm

EDA companies produce a wealth of content to help IC engineers get the best out of their tools through several means:

  • Reference Manuals
  • User Guides
  • Tutorials
  • Workshops
  • Seminars
  • Training Classes
  • Phone Support
  • AE visits

Continue reading “Best Practices for Using DRC, LVS and Parasitic Extraction – on YouTube”


Towards the 0 DPM Test Goal
by Paul McLellan on 07-10-2013 at 10:43 am

At Semicon yesterday I attended Mentor’s presentation on improving test standards. Joe Sawicki was meant to present but he was unable to get a flight due to the ongoing disruption at SFO after last weekend’s crash. I just flew in myself and it is odd to see the carcass of that 777 just beside the runway we landed on.

The big challenges facing manufacturing test at present are:

  • achieving adequate test quality
  • minimizing test cost
  • dealing with increased design size and resultant test complexity
  • methodology and tools for testing 3D stacked die
  • ramping production and yield quickly


Over the last few process generations, changes in the manufacturing technology have had knock-on effects on test. First, back at 180nm, damascene copper interconnect brought an increase in resistive “open” defects, requiring improved test quality. Then at 45nm and below the aggressive RET decoration created pattern-dependent defects, requiring DFM and diagnosis to be more tightly combined. And now with FinFETs, we require further improvements in test quality and diagnosis.

Some safety-critical applications such as automotive are starting to have 0 DPM (defects per million) as a quality requirement. This is obviously not appropriate for all designs: tripling the test cost for cell-phone chips, for example, is the wrong trade-off. But another area requiring very high test quality is die going into any sort of 3D stack. A bad die getting through doesn’t just waste that die (and it was bad anyway), it wastes several good die and an expensive package.


The new processes, FinFET and FD-SOI, are still too new to really understand how defects will behave in new cells. But people are anticipating increased in-cell defectivity since there is a dramatic reduction in the minimum feature size for FinFETs compared to planar devices.

In the past, most test coverage has been based on stuck-at fault models. This assumes that a test is good if it fails whenever the output of some cell is stuck at 0 or stuck at 1. Everyone knows that this is not really a realistic model of all the ways that chips actually fail, but it has been surprisingly robust and it is easy to measure.
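As a toy illustration of the stuck-at model (my own example, not from the presentation): a pattern detects a fault only if the fault-free output differs from the faulted output. For a single AND gate, the sketch below enumerates all patterns and reports which stuck-at faults on the output each one detects:

```cpp
#include <iostream>

// Toy stuck-at fault check on one AND gate (illustrative only).
// A pattern "detects" a fault when the good output differs from the faulted output.
enum Fault { NONE, SA0, SA1 };  // fault on the gate output

bool and_gate(bool a, bool b, Fault f) {
    bool out = a && b;
    if (f == SA0) out = false;  // output stuck at 0
    if (f == SA1) out = true;   // output stuck at 1
    return out;
}

int main() {
    std::cout << std::boolalpha;
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b) {
            bool good = and_gate(a, b, NONE);
            bool d0   = good != and_gate(a, b, SA0);  // does this pattern detect stuck-at-0?
            bool d1   = good != and_gate(a, b, SA1);  // does it detect stuck-at-1?
            std::cout << "a=" << a << " b=" << b
                      << "  detects SA0: " << d0
                      << "  detects SA1: " << d1 << "\n";
        }
    // Only a=1,b=1 detects stuck-at-0; any pattern whose good output is 0 detects stuck-at-1.
}
```

Coverage is then simply the fraction of modeled faults detected by at least one pattern, which is why the metric is so easy to compute even for huge designs.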


But with more complex processes (and RET and more complex failure modes) this is not enough. The first thing to do is to look inside the cells and use cell-aware ATPG to look for undetected faults based on the actual cell layout and transistor netlist structure. One particular problem with FinFETs is strength-related defects. Since transistors are a fixed size and multiple fins are used for higher drive, there is the risk that only some of the fins fail, so the circuit is still functionally correct but its drive strength is not adequate.

AMD did some work with Mentor on improving quality using cell-aware ATPG. Compared to just using the regular stuck-at model, they improved test quality at wafer sort by an incredible 885 DPM. This is a huge number, high enough to greatly reduce the need for system-level functional test.

The next issue is to be able to drill down into systematic failures, where many more chips than expected fail on the same vector. The fix, for example, may be to redo the DFM decoration to improve yield, but that requires the capability to look for pattern-related failures and to isolate the problem not just to a net but to the correct segment of the net.


So in the new era there will be changes in test methodology:

  • need better detection of FinFET defect mechanisms (within the cells)
  • need further improvements in diagnosis resolution and root-cause determination

Obviously with its strong position in test, Mentor plans to deliver the required capabilities in the future.

LATE NEWS:
At 1pm today at Sematech, Mentor’s TestKompress with cell-aware ATPG was awarded the Best of West award. The award recognizes important product and technology developments in the microelectronics supply chain. The Best of West finalists were selected based on their financial impact on the industry, engineering or scientific achievement, and/or societal impact.


Apple iOS7 Will Drive The Next Semiconductor Cycle
by Ed McKernan on 07-08-2013 at 12:00 am

We have entered the Summer lull, the quiet period before what is likely to be Apple’s largest volume launch of smartphones ever. While the launches are coming in like clockwork, it is the magnitude of the ramp that is likely to surprise many and it will be due not just to the features of iOS7, but also to the adoption of a well-worn Microsoft strategy. Continue reading “Apple iOS7 Will Drive The Next Semiconductor Cycle”


Intel Versus the Fabless Semiconductor Ecosystem!
by Daniel Nenni on 07-07-2013 at 7:30 pm

In case you missed it, there is an interesting conversation on my blog, The Future of Mobile Devices and the Semiconductor Landscape!, between Ashraf Eassa and myself. Ashraf and I also converse privately; he’s a very respectful and intelligent young man. For the last year Ashraf has been writing financial articles for Seeking Alpha, which has earned him the reputation of an “Internationally Recognized Intel Fan”.

Ashraf did a nice Intel vs TSMC process comparison in the comment section; clearly he put a lot of time into it and for that I’m grateful. It also gives a telling look into how financial analysts think. Ashraf is a recent college graduate; if you are looking for an entry-level analyst, make him an offer, absolutely. Unfortunately, Ashraf, along with many other analysts, does not GET the fabless semiconductor ecosystem:

How will the fabless players compete at 20nm if they will not get the performance/power improvements to roughly match Intel’s 22nm FinFET process until 2015 when, presumably, Intel will be shipping its 2nd gen FinFET? Perhaps I’m missing something…

Let’s start at the beginning: the fundamental reason the fabless semiconductor ecosystem exists today is that traditional semiconductor companies did not do their job. They did not have vision, they did not innovate, and most importantly they did not efficiently manage their manufacturing capabilities. The first fabless companies leveraged excess semiconductor manufacturing capacity out of Japan, innovating their way into new markets. Chips and Technologies was one of the first and they schooled Intel on how to build PC chipsets. Intel deployed the “if you can’t beat them buy them” strategy and the rest is history. Another early fabless innovator is Xilinx.

It is a familiar Silicon Valley story: Xilinx co-founder Ross Freeman wanted to create a blank semiconductor device that could be quickly programmed based on an application’s requirements. Even back then semiconductors cost millions of dollars to design and manufacture, so FPGAs were not only a cost savings, they also dramatically reduced time to market for electronic products. Of course Ross’s employer Zilog did not share this vision and Xilinx was created in 1984. The founder of Zilog was an ex-Intel employee who spun out to compete with his former employer, so he had vision. Unfortunately he had microprocessor blinders on and could not see the big picture. Sound familiar (Intel)?

Fast forward to today: the mobile market owns the semiconductor industry, with SoCs driving process technology. The traditional semiconductor companies that once opened up their manufacturing services to the fabless companies are now fab-lite or totally fabless. Intel is also now renting out manufacturing capacity and looks to compete with the top fabless companies for mobile sockets with its Atom processor and superior manufacturing technology. Looks good on PowerPoint, as Ashraf has power pointed out, but the history of the semiconductor industry does not agree.


The mobile SoC market is fast moving and Intel is not. Qualcomm and Apple alone ship more SoCs than Intel ships processors, and at a much lower cost/margin. Qualcomm and Apple both license the ARM architecture and custom tailor their processors to the mobile products they control. Samsung is also licensing the ARM architecture for its line of SoCs. It’s not just a power/performance/price play, it also allows a much higher level of device integration. I don’t EVER see Intel becoming an IP company and licensing the Atom architecture.

The definition of SoC is System on Chip which means all of the semiconductor devices inside your phone will be on one chip at the lowest possible cost/margins. This will require mobile vision, flexibility, and a fast moving product plan. Does that really sound like Intel to you?



4th Fundamental Circuit Element – Can it be a boon to Semiconductor Industry?
by Pawan Fangaria on 07-07-2013 at 1:11 pm

It was a nice break after my vacation, indulging in some pure science, when an old colleague of mine, Dr. Gaurav Gandhi, founder of mLabs in Delhi, got in touch to introduce me to his new research and possible developments in this field. Gaurav was actually in my product validation team while I was at Cadence, very passionate about getting deep into technology and doing things in newer ways.

I was delighted to learn the details of the memristor, about which I didn’t have full knowledge.

Gaurav and Varun Aggarwal have published a paper “Bipolar Electrical Switching in Metal-Metal Contacts” at Cornell University Library. One can also access it from their website here. The memristor was first postulated by Prof. Leon O. Chua (University of California, Berkeley) in 1971 as a non-linear passive two-terminal electrical component linking electric charge and magnetic flux. Later, in 2008, Stan Williams, et al. from HP Labs demonstrated a physical memristor which exhibits electrically controllable state-dependent resistance. HP Labs actually built their memristor using a thin titanium dioxide film between two electrodes. More historical references can be found in Gaurav’s paper, but let’s see what the paper tells us and then ponder over some very near-term opportunities and developments in this direction.
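For readers who want the one-line definition, Chua’s original formulation (standard textbook notation, not taken from Gaurav’s paper) relates charge and flux directly, and the state-dependent resistance falls out of it:

```latex
% Chua's memristor: a constitutive relation between charge q and flux \varphi.
\varphi = f(q), \qquad
M(q) \equiv \frac{\mathrm{d}\varphi}{\mathrm{d}q}, \qquad
v(t) = \frac{\mathrm{d}\varphi}{\mathrm{d}t}
     = \frac{\mathrm{d}\varphi}{\mathrm{d}q}\,\frac{\mathrm{d}q}{\mathrm{d}t}
     = M(q)\, i(t)
```

In other words, the element behaves as a resistance M(q) whose value depends on the entire history of the current that has flowed through it, which is exactly the electrically controllable state-dependent resistance HP Labs observed.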

Gaurav actually discovered memristive properties in the cat’s whisker (one of the first wireless radio detectors) as well, and identified the state variable governing the resistance state of the device, which can be programmed to switch between multiple stable resistance states. He also emphasized that memristive properties are present in a larger class of devices called coherers (including the cat’s whisker), hence introducing a canonical implementation of a memristor.

[Cat’s whisker setup]

In his experiments, the cat’s whisker setup was achieved with a galena crystal in contact with a phosphor bronze wire; currents of varying amplitudes were passed through it and the voltages across the setup were recorded. He developed a programmable system in which an input current waveform of desired amplitude and time period could be digitally generated and the output voltage waveform recorded.

Results for two representative input waveforms, i) rising triangular bursts of current and ii) fixed triangular waveforms in both positive and negative directions, show that the devices exhibit three distinct behaviours: i) cohering action, ii) multi-state memristive behaviour and iii) bi-stable resistive RAM (Random Access Memory) behaviour. Beyond this setup, the phenomenon was observed in a wide class of crystals such as carborundum and iron pyrite, and in metals including iron filings, aluminium, nickel, etc. A coherer consists of an imperfect metal-metal contact.

[Current and voltage Vs time and device behaviour as state-dependent resistance]

It can be clearly inferred that at a current higher than a threshold, the average resistance of the device falls. Once the device takes this new state, it maintains its non-linear DC resistance even on excitation by a current below that threshold; that is an indication of cohering action.

It’s also evident that the maximum current during a time interval acts as a state variable, dictating the resistance in that time interval. As the device experiences higher peak currents, it sets itself to lower non-linear DC resistance values. This shows multi-state behaviour of the device.
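A toy numerical sketch of that state variable, entirely my own simplification: it assumes the non-linear DC resistance is a decreasing function of the largest current the device has seen so far, which is roughly the qualitative behaviour described above, and the constants are invented for illustration:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Toy coherer/memristor model (illustrative only): the DC resistance is assumed
// to be a decreasing function of the peak current experienced so far.
double resistance(double peak_current) {
    const double r_high = 10000.0;  // pristine, high-resistance state (ohms, invented)
    const double r_low  = 100.0;    // fully cohered, low-resistance state (ohms, invented)
    const double i_ref  = 1e-3;     // current at which the device is fully set (amps, invented)
    double x = std::min(peak_current / i_ref, 1.0);
    return r_high * (1.0 - x) + r_low * x;
}

int main() {
    // Rising triangular bursts of current, as in the experiment described above,
    // followed by a smaller burst to show the cohering (memory) effect.
    std::vector<double> bursts = {0.1e-3, 0.3e-3, 0.6e-3, 1.0e-3, 0.2e-3};
    double peak = 0.0;
    for (double i : bursts) {
        peak = std::max(peak, i);  // the state variable: maximum current so far
        std::cout << "drive " << i * 1e3 << " mA -> R = "
                  << resistance(peak) << " ohm\n";
    }
}
```

Driving this model with rising bursts reproduces the qualitative behaviour described above: the resistance drops after each larger burst and stays low when a smaller current follows.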

[Alternate current and voltage Vs time and bi-stable memristive behaviour of device]

When activated by a bipolar current input, the device sets itself in one state in the positive cycle and in a different state in the negative cycle, thus oscillating between these two stable states, forming the famous eight-shaped pinched hysteresis loop in its V-I characteristic. That’s the bi-stable resistive RAM behaviour.

Opportunities To Semiconductor Industry – It’s clear that a piece of wire can be configured as a memory component which can be ultra-optimized in terms of PPA (Power, Performance and Area), the most sought-after criteria in the semiconductor industry today. It could work well as a storage device. Considering its multi-state property, I can foresee that there could be more applications. Also, once commercially proven, I believe its manufacturing could be simpler and cost effective, if a ubiquitous and suitable material with these properties could be utilized.

As stated above, HP Labs had experimented with memristors. In collaboration with Hynix, they are said to have plans to manufacture ReRAM (Resistive Random Access Memory) commercially, which could replace the Flash memory used in smart phones, tablets and MP3 players. Possibly, HP will unveil its baby by the end of 2014.

I am looking forward to more such applications of this simple but far reaching device. Comments and suggestions are welcome!!


Should an IP vendor become a PHY IP Dealer?
by Eric Esteve on 07-07-2013 at 3:20 am

This is not a theoretical question. Imagine that you are developing and selling digital IP, like Interface Controller IP for PCI Express, USB 2.0 or 3.0, MIPI Camera Serial Interface (CSI) or Display Serial Interface (DSI). If you look at companies like Synopsys, they have built their success on the “Integrated Interface IP” concept. In other words, they sell both the Controller and the PHY IP to support a specific protocol (USB, PCIe, MIPI, etc.). You may think that you could grow your business much faster by mimicking the competition and selling both the Controller and the PHY IP. The first problem you face is finding the right resources: while you can easily grow your digital design team, it’s much more difficult to find, attract and hire experienced analog designers (I mean those able to design a 6 GHz PLL or a 10 Gbps SerDes). Thus, it looks easier and faster to simply resell PHY IP developed by another party… and get a higher share of this fast-growing Interface IP market:

After some research, you have exercised your network and found, for example, an IDM ready to enter a partnership with you: they bring the silicon-validated PHY IP; you bring your support team and sales network and market these IP. No doubt you will find such an IDM, as some of them need to increase their revenue to compensate for sales weaknesses. Let’s try to draw the complete marketing picture and see the pros, and also the drawbacks, of such a deal.

The first positive point, I should say very positive, is that this very complex IP (say a 5 Gbps SerDes) has been not only silicon proven, but also production and system proven. This is a very strong sales argument, and you will certainly use it. Nevertheless, don’t forget that this argument stays valid only for the same technology: same technology node, same foundry. As soon as you need to port the IP, either to another technology node at the same foundry or to a similar node at another foundry, you will have to modify the design, opening the door to potential mistakes… By the way, who in your team will take care of this porting? In fact, you will have to hire a few of those analog designers. From past experience as ASIC program manager for a SoC including several complex analog functions, functions that were responsible for an 18-month additional delay and successive silicon revisions ending at a “K” version (A is the first, then B, and so on), my personal advice is to hire the best you can find!

Another drawback comes to mind: if you resell these IP from this IDM, there is a pretty high probability that other IP vendors will deal the same IP from the same source. Then you face another issue: lack of differentiation. You become the Nth IP vendor selling exactly the same product. Some old marketing training reminds me that, in this case, you will probably have to use price as the only possible differentiator. Price differentiation is the worst situation when selling a complex function! Such a differentiator works well when selling commodities like TTL or DRAM. The only way to escape this situation is to rely on an efficient and talented analog design team, so you can redesign the function for lower power, smaller area or better performance. But in that case, why not design your own complex IP, so you have complete knowledge of the function, are in a position to give the best possible technical support, and can modify the IP upon customer request (finally creating much more value)? And, last but not least, a business-related argument: your sales team will market a “clean” IP, from a legal standpoint, as you completely own the IP rights… Just make your decision: choose to be a PHY IP dealer, or decide it is better to develop and license your own IP.

By Eric Esteve from IPnest



Calypto 2013 Report
by Paul McLellan on 07-05-2013 at 5:48 am

Each year Calypto runs a survey of end-users. This year’s survey and report has two parts, power reduction and high level synthesis (HLS).

The topics covered are:

  • survey methodology and demographics
  • top methods used to reduce power
  • engineering time spent on specific RTL tasks to reduce power
  • plans to deploy RTL power reduction tools in 2013
  • methods to verify HLS output RTL
  • important technologies to implement with HLS
  • summary


The above pie-chart shows the detailed breakdown. The primary job functions specified were engineering and CAD management (23%), system designer (14%), verification engineer – hardware (12%), and hardware synthesis engineers (11%).

I’m not going to go into the whole report here; you can download it and read the whole thing. But I’ll pick out a couple of interesting charts.


The first one shows the popularity of the main means of power reduction. This is at the RTL level so some techniques for power reduction are applicable only at other stages in the design process.

  • Top is sequential clock gating (such as is done by PowerPro and its competitors).
  • Second is power gating, which is powering down blocks that are completely unused, such as the transmit/receive logic in a cellphone when there is no call taking place.
  • Third is combinational clock gating. I think this is so universal that people don’t always realize they are using it since synthesis does it semi-automatically, so I’m surprised it isn’t #1. It usually is in this sort of survey.
  • Next is dynamic voltage and frequency scaling (DVFS), which is surprisingly high to me. It is a difficult technique to use, requiring careful tuning of how fast and in what order the frequencies and voltages are changed. It is used in high-end microprocessors but I’ve not really heard of it in other parts of big SoCs.
  • Next is multi V[SUB]th[/SUB], again a relatively low score if it covers using multi V[SUB]th[/SUB] libraries. RTL synthesis does this automatically so many people may be unaware they are doing this even when they are.
  • Resource sharing, memory gating and data gating bring up the rear.


Interestingly, the chart above, which shows the techniques used to verify high-level synthesis RTL against its original input, doesn’t show a lot of people using sequential formal verification, which I find a bit surprising. FPGA prototyping seems a poor substitute for formal techniques. After all, for RTL synthesis we pretty much use formal techniques rather than simulation these days. Anyway, since Calypto has the only sequential formal verification tool on the market today, that is a potentially high growth opportunity.

So in summary, SoC, IC, and FPGA design professionals completed a blind, anonymous survey online.

  • The average time spent on various RTL power tasks was: 20 percent on automated optimization, 20 percent on guided/manual power reduction, and 17 percent on RTL power analysis.
  • The combined total organizational involvement with using, implementing, or evaluating RTL power reduction technology is close to two-thirds. Almost a quarter of respondents said their organization had already implemented RTL power reduction tools.
  • More than three-quarters of respondents used HLS. The top method for verifying the RTL output was FPGA prototyping. There was a range of usage among directed tests, assertion-based, constrained random, hardware emulation, and C-to-RTL formal.


GSA Awards Deadline Looming + GSA Entrepreneurship Conference
by Paul McLellan on 07-05-2013 at 5:09 am

GSA has awards in various categories that are presented at its annual awards dinner. This year’s dinner will be on Thursday December 12th at the Santa Clara Convention Center.

Some of the awards have now passed their cutoff date. But a few remain open until July 12th (hurry, just one more week):

  • Startup to Watch Award
  • Most Respected Private Semiconductor Company Awards
  • Outstanding Asia-Pacific (APAC) Semiconductor Company Award
  • Outstanding Europe, Middle-East, Africa (EMEA) Semiconductor Company Award

There are also several awards for public companies:

  • Best Financially Managed Semiconductor Company
  • Most Respected Public Semiconductor Award (>$1B sales)
  • Most Respected Public Semiconductor Award ($251M to $1B sales)
  • Most Respected Public Semiconductor Award ($100M to $250M sales)
  • Analyst Favorite Semiconductor Company Award

Public companies do not need to be nominated since they are assessed based on public financial data. But private companies and individuals do need to be nominated on one of these forms.

If you want to attend the dinner (either as an individual or as a company purchasing a whole table) then the details are here. The dinner is made possible by sponsor TSMC as well as some other general sponsors. December 12th at 7pm, Santa Clara Convention Center.

The keynote speaker at the dinner is the Honorable Cory A. Booker, Mayor of Newark, New Jersey. He took the oath of office as Mayor of New Jersey’s largest city in July 2006 following a sweeping electoral victory, and was re-elected for a second term by another overwhelming majority in 2010. Mayor Booker brings his passion for social change to the podium. Sharing stories informed by real life, Mayor Booker demonstrates the need for everyone to take responsibility to help this nation live up to its promise. Mayor Booker also sheds light on the reform government needs to undergo to become equipped to deal with the challenges of modern times. Drawing from a deep belief in service and social justice, Mayor Booker inspires audiences to greater civic responsibility and issues a call to action for America.

Full details of all the awards, including the small print (“must be a semiconductor company” etc) are on the GSA website here. This is also where you can find links to the nomination forms.

Also, don’t forget the annual GSA Entrepreneurship Conference is at the Computer History Museum on July 18th. It is from 3.30pm to 8.30pm (note the slightly unusual time). Details are here. There is no charge but you must register. The full program, including details of the panels and the speakers is here.