I/O Bandwidth with Tensilica Cores
by Paul McLellan on 08-17-2012 at 3:00 pm

It is a truism that somewhere in an SoC something is limiting any further increase in performance. This is especially noticeable when a Tensilica core is used to create a highly optimized processor for some purpose: the core performance may be boosted by a factor of 10 or even as much as 100. Once the core itself is no longer the limiting factor, the I/O bandwidth needed to get data to and from the core often becomes the next bottleneck. Traditional bus-centric design simply cannot handle the resulting increase in data traffic.


A long time ago, processors had a single bus for everything. Modern processors separate that, with one or more local buses to access ROM, RAM and perhaps other memories, leaving a common bus to access peripherals. But that shared peripheral bus becomes the bottleneck if processor performance is high.

Tensilica’s Xtensa processors can have direct port I/O and FIFO queue interfaces to offload overused buses. There can be up to 1024 ports, and each can carry up to 1024 signals, boosting I/O bandwidth by thousands of times relative to a few conventional 32- or 64-bit buses.


But wait, there’s more. Tensilica’s flexible length instruction extension (FLIX) allows designers to add separate parallel execution units to handle concurrent computational tasks. Each user-defined execution unit can have its own direct I/O without affecting the bandwidth available to other parts of the processor.


While plain I/O ports are ideal for fast transfer of control and status information, Xtensa also allows designers to add FIFO-like queues. These allow the transfer of data between the processor and other parts of the system that may be producing or consuming data at different speeds. To the programmer they look just like traditional processor registers, but without the bandwidth limitations of shared memory buses. Queues can sustain data rates as high as one transfer per clock cycle, or 350Gb/s for each queue (a maximum-width 1024-bit queue transferring once per cycle at a clock in the neighborhood of 350MHz moves roughly 350Gb/s). Custom instructions can perform multiple queue operations per cycle, so even this is not the cap on overall bandwidth from the processor core. This allows Xtensa processors to be used not just for computationally intensive tasks but also for applications with extreme data rates.
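
For readers who think in RTL, here is a minimal SystemVerilog sketch of the kind of FIFO queue endpoint such an interface presents to the rest of the system. This is an illustration only, not Tensilica’s TIE syntax; all names and parameters are invented, and the real queues are generated automatically by the Xtensa tools.

  // Hypothetical sketch of a wide FIFO queue between a producer and a core.
  module queue_sketch #(
    parameter int WIDTH = 1024,   // queues can be very wide
    parameter int DEPTH = 16      // must be a power of two here
  )(
    input  logic             clk, rst_n,
    // producer side: one push per cycle when not full
    input  logic             push,
    input  logic [WIDTH-1:0] din,
    output logic             full,
    // consumer (processor) side: one pop per cycle when not empty
    input  logic             pop,
    output logic [WIDTH-1:0] dout,
    output logic             empty
  );
    logic [WIDTH-1:0] mem [DEPTH];
    logic [$clog2(DEPTH):0]   count;          // 0..DEPTH
    logic [$clog2(DEPTH)-1:0] wr_ptr, rd_ptr; // wrap naturally

    assign full  = (count == DEPTH);
    assign empty = (count == 0);
    assign dout  = mem[rd_ptr];

    always_ff @(posedge clk or negedge rst_n) begin
      if (!rst_n) begin
        count <= '0; wr_ptr <= '0; rd_ptr <= '0;
      end else begin
        if (push && !full) begin
          mem[wr_ptr] <= din;
          wr_ptr      <= wr_ptr + 1'b1;
        end
        if (pop && !empty)
          rd_ptr <= rd_ptr + 1'b1;
        case ({push && !full, pop && !empty})
          2'b10:   count <= count + 1'b1; // push only
          2'b01:   count <= count - 1'b1; // pop only
          default: ;                      // both or neither: count unchanged
        endcase
      end
    end
  endmodule

The producer and consumer each sustain one transfer per cycle independently, which is exactly the decoupling between mismatched data rates described above.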

It is no good adding powerful capabilities if they are too hard to use. I/O ports are declared with simple one-line declarations (or a check-box configuration option). A check-box configuration is also used to define a basic queue interface, although a handful of commands can be used to create a special-function queue.

Ports and queues are automatically added to the processor and, of course, are completely modeled by the Xtensa processor generator, reflected in the custom software development tools, instruction set simulator (ISS), bus functional model and EDA scripts.

A white paper with more details is here.



What’s Next For Emerging Memories
by Ed McKernan on 08-17-2012 at 11:00 am

In doing some digging in preparation for the start of www.ReRAM-Forum.com, Christie Marrian asks whether ReRAM/CBRAM technology is approaching a ‘tipping point’ relative to NAND Flash. You can read more of his analysis over at the blog he moderates (ReRAM-Forum.com). Also, a note to readers: the blog is interested in collecting new posts from engineers and developers working with today’s memory and emerging memory technologies. Drop Christie a note with your analysis, or if you have written a paper on emerging memories, the site welcomes original research work.


2012 semiconductor market decline likely
by Bill Jewell on 08-16-2012 at 9:00 pm

The worldwide semiconductor market in 2Q 2012 was $73.1 billion, according to WSTS data released by the SIA. 2Q 2012 was up 4.7% from 1Q 2012 but down 2.0% from 2Q 2011. Major semiconductor companies are generally expecting slower revenue growth in 3Q 2012 versus 2Q 2012. The table below shows revenue estimates for calendar 3Q 2012 for the largest semiconductor suppliers which provided guidance. TSMC, the largest wafer foundry company, is included since its business is a key indicator of the outlook for many fabless companies.

TSMC, Texas Instruments, Qualcomm, STMicroelectronics and AMD all predicted revenue declines at the low end of their 3Q 2012 guidance. The midpoints of guidance ranged from -1% to +5.9%. The high end of guidance was over 9% for Intel and Broadcom, but below 6% for the other companies. Renesas was an exception, forecasting 17.6% growth in 3Q 2012 after an 11% decline in 2Q 2012.

The major memory suppliers – Samsung, SK Hynix and Micron Technology – did not provide specific revenue guidance for 3Q 2012 but expressed similar outlooks: a weak DRAM market and a steady to improving flash memory market. Given the lackluster guidance by major semiconductor companies, the 3Q 2012 semiconductor market will likely show slower growth than the 4.7% in 2Q 2012. This slow growth will likely continue into 4Q 2012. TSMC indicated it expects a decline in revenue in 4Q 2012 from 3Q which could be as severe as double-digit.

With semiconductor market growth sluggish in the second half of 2012, it appears the full year 2012 will show a decline from 2011. We at Semiconductor Intelligence believe our February 2012 forecast of a 1% decline for 2012 was the first forecast from an analyst firm to predict a decline. We revised the forecast up to 2% growth in May, based on signs at the time of improvement in both the worldwide economy and in electronics markets. We have returned to the 1% decline in our latest forecast.

Most analyst firms expect 2012 semiconductor market growth in the 4% to 7% range. WSTS’s May forecast was for only 0.4% growth. The Carnegie Group in July forecast a flat market. The Information Network in August predicted a decline in 2012, but did not state a specific number. Mike Cowan’s forecast model based on historic WSTS data is updated each month. Cowan’s 2012 forecast first went negative in March, turned slightly positive in June and July, and went negative again in August at -0.9%.

The semiconductor market in the last twelve years has shown years of growth over 30% and declines as high as 32%. From this perspective, the difference between a low single-digit decline and a low single-digit increase in 2012 does not appear meaningful. However, it is important from a psychological standpoint. The semiconductor industry does not want to see a decline in 2012, especially after growth of only 0.4% in 2011. Semiconductor companies would like to show positive revenue growth to their shareholders in 2012, even if very slight, rather than a decline. Unfortunately, a decline is becoming more likely. The major economic concerns – the European debt crisis and the weak U.S. recovery – are not likely to be resolved before the end of 2012. Two key drivers of the semiconductor market are showing no growth. IDC estimates PC shipments in 2Q 2012 were flat versus a year ago. IDC also said mobile phone shipments were up only 1% from a year ago in 2Q 2012 after a 1.5% decline in 1Q 2012.

Semiconductor Intelligence


The Generational Legacy of Steve Jobs
by Ed McKernan on 08-16-2012 at 12:00 pm

Truly great leaders are recognized by the impact they leave several generations down the road. Roosevelt and Churchill are two historical figures who together saved Western Civilization, thus leaving a tremendous legacy even now, two generations later. In the semiconductor world we mark our generations in the two-year cadence of Moore’s Law. When Steve Jobs passed away, Walter Isaacson’s book noted that he left Apple a 4-year product development pipeline. Surely this is significant with regard to Apple’s future viability, but I am beginning to believe that he also put in place an IP and branding strategy whose legacy will last a generation, which is atypical for technology companies. Perhaps only IBM can claim that. The reasoning behind this post is the current courtroom battle between Apple and Samsung. Steve Jobs used the word “Thermonuclear” to describe how he would destroy Android, and I am beginning to believe his intention went beyond an IP fight to the public humiliation of the Android cloners. Samsung is being forced to go through what I would call a “Branding Perp Walk.”

Steve Jobs commented that at the time he left Apple in 1985, the company had a 10-year technology lead. Given that he had spent 10 years building Apple, one could conclude that one of his work years was equivalent to two years developing a PC at IBM or an O/S at Microsoft. History has proven him out, as Microsoft was not able to match the 1984 Macintosh until Win95 was launched. There is no doubt that the Mac was way ahead of its time, as the software overwhelmed the processor, graphics and DRAM hardware. It would take several Moore’s Law generations for silicon performance to improve and costs to drop enough to support mass-market GUI-based PCs. When that point arrived, however, the cloners, led by Dell, feasted off the higher-margined IBM, Compaq and Apple products, reducing those brands to being differentiated only by price.

Fast-forward a dozen years to 2007, when the introduction of the iPhone delivers the most revolutionary computing platform since the 1984 Macintosh. Steve Jobs knows it will be copied and that, unlike in the John Sculley era, Apple will need to vigorously defend its IP and its brand or fall into the Dell trap. Eric Schmidt, taking the role of Bill Gates, begins executing the software commoditization strategy with the free Android O/S. The cloners are set in motion to the point that, in the case of Samsung, everything down to the product packaging is replicated. Apple needs to put a halt to the rapidly expanding Android ecosystem and destroy not only Google but the cloners. If the whole world can see that Apple’s competitors are nothing more than fake knock-offs, then their brands can be severely damaged, which breaks their business model. How many individuals will want to show off their new Android smartphone to friends at a dinner party after the supplier has been slapped down in the court of public opinion?

To convince a future judge and jury, and the world, that Apple plays fair, Steve Jobs laid a honey trap that Samsung fell into. Per court testimony, we find that Apple was willing to offer Samsung a license on its patents at the rate of $30 per smartphone and $40 per tablet. If Samsung took the license, the impact on profits would be so great that it could not compete with Apple. By declining the license and not negotiating in good faith, Samsung took the risk of being shown to have no respect for an innovator’s property, and it is thus being held up as an example of improper business practices. It is unclear at this time what the settlement will be in terms of royalty payments or penalties. The more relevant point is that Apple will walk away with an enhanced brand while Samsung and the other smartphone cloners will be living with brands that are significantly diminished. This is Apple’s way of preserving its brand and profit streams as it pulls its customers ever tighter into the iCloud ecosystem. In a few years, I think this dominance will make it too difficult for users to leave the Apple cloud or for a new competitor to assault the castle walls.

Soon the trial in California will end and a settlement will come forth, but then the next set of trials will begin in both the US and Europe. Do Apple’s competitors utilizing Android want to continue fighting this “Branding Death March” with a company that has the deepest pockets in the industry, or will they migrate over to Microsoft’s O/S? In the end, it is not just the 4-year product pipeline that is the legacy of Steve Jobs, but also the IP and branding strategy that will extend Apple’s dominance out beyond a generation, a time span that marks true greatness.

Full disclosure: I am long INTC,QCOM,AAPL,ALTR


SystemVerilog from Nevada?
by Daniel Payne on 08-16-2012 at 10:58 am

When I think of EDA companies, the first geography that comes to mind is Silicon Valley because of its rich history of semiconductor design and fabrication; being close to your customers always makes sense. In the information era it shouldn’t matter so much where you develop EDA tools, so there has been a gradual shift to a wider geography. Aldec is one of the early EDA companies, started in 1984, just three years after Mentor opened its doors; however, Aldec is headquartered in Nevada instead of Silicon Valley. I wanted to learn more about Aldec tools and decided to watch their recorded webinar on SystemVerilog.

The first time that I used Aldec tools was back in 2007, when Lattice Semiconductor replaced Mentor’s ModelSim with the Aldec Active-HDL simulator. I updated a Verilog training class and used Active-HDL for my lecture and labs delivered to a group of AEs at Lattice in Oregon. Having used ModelSim before, I found it quite easy to learn and use Active-HDL. For larger designs you would use the Aldec tool called Riviera-PRO.

Webinar

Jerry Kaczynski, a research engineer at Aldec who has been with the company since 1995, presented the webinar. His background includes working on simulator standards. With 53 slides in just 65 minutes, the pace of the webinar is brisk and filled with technical examples; no marketing fluff here.


SystemVerilog came about because Verilog ran out of steam on the verification side. Accellera sponsored SystemVerilog, and the first standard to extend Verilog appeared in 2005; by 2009 the Verilog and SystemVerilog standards were merged. SystemVerilog has various audiences:

  • SystemVerilog for Design (SVD) – for hardware designers
  • SystemVerilog Assertions (SVA) – both design and verification
  • SystemVerilog Testbench (SVTB) – mostly verification
  • SystemVerilog Application Programming Interface (SV-API) – CAD integrators

SVD
Verilog designers get new features in SystemVerilog (a short example follows the list) like:

  • Rich literals: a = '1; small_array = '{1,2,3,42};
  • User-defined data types
  • Enumeration types (useful in state machines)
  • Logic types (can replace wire and reg)
  • Two-value types (bit, int) – simulate faster than 4-state types
  • New operators (+=, -=, *=, /=, %=, &=, |=, <<=, >>=)
  • Hardware blocks (always_comb, always_latch, always_ff)
  • Implicit .name connections for modules, also implicit .* connections in port list
  • Module time (timeprecision, timeunit)
  • Conditional statements (unique case, priority keyword – replaces parallel case and full case pragmas)
  • New do/while Loop statement
  • New break and continue controls

  • Simpler syntax for Tasks and Functions
  • New procedural block called final
  • Aggregate Data Types (Structures, Unions, Arrays – Packed, Unpacked)
  • Structures added (like the record in VHDL or C struct)
  • Unions added
  • Array syntax simplified

  • Special unpacked arrays (Dynamic, Associative, Queues) – not synthesizable
  • Packages – organize your code better using import
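
To make a few of these concrete, here is a small sketch exercising enumerations, the logic type, always_ff/always_comb blocks and unique case. The module and signal names are invented for illustration; this is not code from the webinar.

  // Illustrative only: a tiny state machine using several SVD features.
  typedef struct packed { bit [7:0] tag; bit [23:0] payload; } word_t; // like a C struct or VHDL record

  module blinker (
    input  logic clk, rst_n, go,
    output logic led
  );
    typedef enum logic [1:0] {IDLE, RUN, DONE} state_t; // enumeration type
    state_t state, next;
    int unsigned ticks; // two-state type, simulates faster than 4-state

    always_ff @(posedge clk or negedge rst_n)  // registered logic, intent is explicit
      if (!rst_n) state <= IDLE;
      else        state <= next;

    always_comb begin                          // combinational logic, no sensitivity list to forget
      next = state;
      unique case (state)                      // replaces the old parallel case pragma
        IDLE: if (go)          next = RUN;
        RUN:  if (ticks == 99) next = DONE;
        DONE:                  next = IDLE;
      endcase
    end

    always_ff @(posedge clk or negedge rst_n)
      if (!rst_n)            ticks <= 0;
      else if (state == RUN) ticks <= ticks + 1;
      else                   ticks <= 0;

    assign led = (state == RUN);
  endmodule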

SVA
Assertions are used in property-based design and verification, and they look at the design from a functionality viewpoint (a short example follows these lists).

  • Look for illegal behavior
  • Assumptions on inputs
  • Good behavior, coverage goals

  • HW designers add assertions in code to document and verify desired behavior
  • System level designers can add protocol checkers at top level
  • Verification engineers can add verification modules bound to an object to monitor behavior
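
As a flavor of SVA, here is a one-property checker: every request must be acknowledged within one to three cycles. The signal and module names are invented for illustration.

  // Hypothetical handshake check, usable inline or attached via bind.
  module req_ack_checker (input logic clk, req, ack);
    property p_req_gets_ack;
      @(posedge clk) req |-> ##[1:3] ack; // req implies ack within 1 to 3 cycles
    endproperty
    a_req_ack: assert property (p_req_gets_ack)
      else $error("req not acknowledged within 3 cycles");
  endmodule

A checker module like this can be attached to existing design units with a bind directive, which is also the mechanism the Q&A below mentions for connecting SV checkers to VHDL entities.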

SV Interfaces
For communicating between modules, SV interfaces bring new abilities and less typing, as the sketch below shows.
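
The following is a minimal, invented example (not from the webinar): an interface bundles related signals, and modports give each side of the connection its directions.

  // An interface bundles related signals; modports define each side's view.
  interface simple_bus_if;
    logic        valid, ready;
    logic [31:0] addr, data;
    modport master (output valid, addr, data, input  ready);
    modport slave  (input  valid, addr, data, output ready);
  endinterface

  // A module now takes one interface port instead of a long port list.
  module dma_master (input logic clk, simple_bus_if.master bus);
    always_ff @(posedge clk)
      if (bus.ready) bus.valid <= 1'b1; // drive the bundle through the modport
  endmodule

One interface port replaces a long list of individual ports, which is where the “less typing” comes from.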

SV Testbench
For verification engineers, the testbench side adds object-oriented and randomization features (a short example follows the list):

  • Class is used for OOP
  • Inheritance – reuse previous classes
  • Polymorphism – the same name does different things depending on class
  • Abstract classes – higher level
  • Constrained random testing (CRT)
  • Spawn threads

  • Mailbox (type of Class) – FIFO for message queue
  • Functional Coverage – coverage analysis (covergroups, coverpoints, bins)
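
Here is a small, invented testbench fragment tying several of these together: a constrained-random class, randomization in a loop, and a covergroup sampling the results. All names are illustrative only.

  // Constrained-random packet plus functional coverage.
  class packet;
    rand bit [7:0]  len;
    rand bit [31:0] addr;
    constraint c_len   { len inside {[1:64]}; }  // keep lengths legal
    constraint c_align { addr[1:0] == 2'b00; }   // word-aligned addresses only
  endclass

  module tb;
    packet p = new();

    covergroup cg;                               // functional coverage
      cp_len: coverpoint p.len { bins small = {[1:8]}; bins big = {[9:64]}; }
    endgroup
    cg cov = new();

    initial begin
      repeat (100) begin
        if (!p.randomize()) $error("randomize failed"); // constrained random test
        cov.sample();
      end
    end
  endmodule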

Verification Methodologies

  • Verification Methodology Manual (VMM) – created by Synopsys, both testbench and design as SystemVerilog
  • Open Verification Methodology (OVM) – created by Mentor and Cadence, has SV and SystemC testbench with design files in any language
  • Universal Verification Methodology (UVM) – created by Accellera to unify VMM and OVM
  • Teal/Truss – by Trusster as Open Source HW verification utility and framework in C++ and SV

Q&A
Q: What tools support SVA?
A: SVA is included in Riviera-PRO simulator.

Q: How could I use SystemVerilog in my VHDL testbench?
A: You could bind SystemVerilog as checkers, then connect them to entities or components in VHDL.
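
For instance, assuming the hypothetical req_ack_checker sketched earlier, a bind directive along these lines attaches it to a design unit without touching the design source (my_design and the port names are invented):

  // Instantiate the checker inside every instance of my_design.
  bind my_design req_ack_checker u_chk (.clk(clk), .req(req), .ack(ack));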

Q: What is difference between logic and reg?
A: logic can be used everywhere reg was used, and also where wire was used (single-driver nets).

Q: Can I connect VHDL inside of SystemVerilog?
A: That’s not controlled by a standards body, so it’s tool specific.

Q: Can I synthesize a queue?
A: No, not really today.

Q: How are modports related to assertions?
A: Not directly related; modports are used to define signal directions in an interface.

Q: Can we execute random in modules?
A: Randomization is used with classes.

Q: Will associative arrays handle multi-dimensions?
A: Not yet.

Q: Good SV books?
A: Depends on if you do design or verification. Many good choices. Design subset – Sutherland’s book. Browse Amazon.com.

Q: Constrained random test generation details?
A: Just an overview today, sorry.

Summary
SystemVerilog gives the designer richer ways to express hardware than Verilog, more clearly defined intent, better verification with assertions, and fewer lines of code. It’s about time to upgrade from classic Verilog to SystemVerilog in order to reap the benefits. VHDL designers may also benefit from using SV for verification.


40 Billion Smaller Things On The Clock
by Don Dingee on 08-15-2012 at 8:00 pm

Big processors get all the love, it seems. It’s natural, since they are highly complex beasts and need a lot of care and feeding in the EDA and fab cycle. But the law of large numbers is starting to shift energy in the direction of optimizing microcontrollers.

I mulled the math in my head for a while. In a world with 7 billion people and a projected 50 billion “connected devices”, there are conservatively speaking at least 40 billion smaller things with powerful microcontrollers inside. That’s not counting the small package, jelly bean MCU parts inside a toaster. I’m talking about 32-bit MCUs powerful enough to drive a networking stack, display, and user interface. Billions and billions, as Carl Sagan used to say.

The same art that has gone into designing high-end microprocessors will turn toward designing this new breed of microcontroller, with one big difference: power consumption will rule designs, from beginning to end. The microcontroller world has predominantly gotten away with 99% sleep (something I’ve recently seen referred to as “near death” mode, depressing) and relatively low clock rates as the way to conserve power, but that’s going to change as the expectations for connectivity and performance in these new connected devices shift.

Microcontroller and SoC designs turned to massive clock gating a generation ago as a power management technique, dynamically shutting down logic paths not in use at a particular moment. Clock gating on this scale has been a highly manual art, in large part well worth the investment. (See the discussion on P. A. Semi in my post on the Apple A5 SoC family.)

A little more than a year ago, Cadence quietly purchased Azuro, proponents of clock concurrent optimization. CCOpt does timing-driven placement, logic re-sizing, and clock gating in a single step, rather than leaving the clock gating to man-months of post-design hand optimization, or considering clock gating separately from timing considerations. They’ve integrated that capability into their Encounter Digital Implementation System 11.1.

Broadcom was one of the first companies to grab the CCOpt capability, but they have looked at it from a performance and timing closure perspective, and as a way to increase EDA design throughput by reducing cycle time. It’s a good first step, and they admit one goal is more performance for the same watts.

When the world’s largest MCU company, Renesas, grabs CCOpt and starts using it, they find something quite interesting as they try to reduce MCU power. Their take is that the clock network itself consumes about one-third of overall MCU power, even on a relatively pedestrian 160MHz part. By using CCOpt, Renesas teams pulled out a 30% reduction in MCU clock power – and since the clock network is a third of the total, that works out to around 10% of overall chip power just from optimizing the clock network.

That doesn’t sound like much, but consider that there are cars with upwards of 100 MCUs inside, and many of them are always on, managing safety, performance and environmental systems. Renesas shares their outlook for MCUs in cars, and what power consumption means to them.

Automotive is just one area where advanced MCUs will make an impact. Reducing MCU power as 40 billion devices are more and more in the “on” state will draw increasing amounts of EDA attention in the next few years. We’ll see more love flow from the clock gating and optimization practices for big processors down to MCUs soon.


What’s Inside Your Phone?
by Daniel Nenni on 08-14-2012 at 7:35 pm

Now that the mobile market is keeping us all employed, take a close look at what is actually inside those devices we can’t live without. Before SoCs you could just read the codes on the chips. Now it is all semiconductor IP, so you have to do a little more diligence to find out what is really powering your phones and tablets. One thing you can be sure of is that there are multiple DSP cores doing a variety of tasks, and there is a 70% chance they are from CEVA.

CEVA is the world’s leading licensor of DSP cores and platform solutions for the mobile, digital home and networking markets. For more than twenty years, CEVA has been licensing a portfolio of DSPs, platforms and software to leading semiconductor vendors and original equipment manufacturer (OEM) companies worldwide. CEVA’s IP portfolio includes comprehensive technologies for cellular baseband (2G / 3G / 4G), multimedia, HD video, HD audio, Voice over IP (VoIP), Bluetooth, Serial Attached SCSI (SAS) and Serial ATA (SATA).

CEVA’s technologies are deployed in hundreds of millions of smartphones and handsets every year, and currently power one in every three handsets shipped worldwide. From cellular baseband processing to audio, voice, multimedia and Bluetooth, CEVA’s broad portfolio of low-power DSP cores and platform IP is ideally suited to wireless handset applications.

CEVA even has a very nice Wikipedia page:

CEVA was created through the combination of the DSP IP licensing division of DSP Group (NASDAQ:DSPG) and Parthus Technologies plc in November 2002.[2] The company develops advanced technologies for multimedia and wireless communications chips. CEVA has the world’s #1 DSP architecture deployed in cellular baseband processors.[3] In 2011, CEVA reported revenues of $60.2 million and its technology was used in more than 1 billion cellular and electronic entertainment devices. CEVA may be the only Israeli company involved in the production of the iPhone.[4]

Combined shipments of smartphones and tablets are expected to grow more than 40% in 2012. Single-core devices will become dual core, dual-core devices will become quad core, and speeds will double again. To date, more than 3 billion CEVA-powered chips have been shipped worldwide. In 2011 alone, CEVA licensees shipped more than 1 billion CEVA-powered products. Recent industry data from The Linley Group put CEVA’s share of the DSP IP market at 70%.

With more than 200 licensees and 300 licensing agreements signed to date, CEVA’s comprehensive customer base includes many of the world’s leading semiconductor and consumer electronics companies. Broadcom, Icom, Intel, Intersil, Marvell, Mediatek, Mindspeed, MStar, NEC, NXP, PMC-Sierra, Renesas, Samsung, Sharp, Solomon Systech, Sony, Sequans, Spreadtrum, ST-Ericsson, Sunplus, Toshiba, VIA Telecom and Xincomm all leverage CEVA’s industry-leading DSP cores and IP solutions. These companies incorporate CEVA IP into application-specific integrated circuits (“ASICs”) and application-specific standard products (“ASSPs”) that they manufacture, market and sell to consumer electronics companies.

The semiconductor IP business model has evolved into quite a profitable one. The CEVA business model consists of three components: upfront license fees; royalty revenue from every chip sold by customers incorporating CEVA IP; and revenues from related customer support, development tools and maintenance. CEVA’s 2012 second quarter was the strongest licensing quarter in over three years, driven by 20+ LTE design wins. Check out the CEVA gallery of products HERE. Impressive!

A Brief History of Semiconductor IP


Chip-Package-System Solution Center
by Paul McLellan on 08-14-2012 at 5:48 pm

One of the really big changes in chip design over the last decade or so is that it is no longer possible to design an SoC, the package it goes in, and the board for that package using different sets of tools and methodologies, and then finally bond out the chip and solder it onto the board. The three, Chip-Package-System, have become so interrelated that everything needs to be concurrently designed. The situation only gets worse once through-silicon vias, interposers and true 3D designs are considered.

There are a large number of different issues that come together. The most obvious involves the various aspects of power: primarily getting power into the package and distributed across the chip, and then getting the heat out again, while also accounting for how variation in temperature will affect performance. Increasingly, in a modern SoC, everything affects everything else: the temperature goes up, which affects the performance, which affects the power, which affects the temperature.

The leader in technology for handling this has been Apache. In fact, the growing importance of CPS and the need to adopt a much more formal analysis approach have been among the drivers of Apache’s revenues and, clearly, one reason that Ansys acquired them.

Apache has a lot of technology in this area, and historically it has required trawling their website (and maybe the Ansys site too) to pull together everything that you might need. But now everything is in one place, the ANSYS/Apache CPS subweb:

  • CPS methodology
  • CPS education
  • CPS news & information
  • CPS blogs
  • CPS user group

If you are interested in CPS (and you pretty much have to be if you are doing any of this stuff) then this is a great resource center. As a starting point, if you have not already read it, there is a white paper, Chip-Package-System Convergence: ANSYS and Apache Technologies for an Integrated Methodology, that is a pretty good overview of the area.


Ajoy Bose and Hogan: SoC Realization
by Paul McLellan on 08-13-2012 at 6:47 pm

Tomorrow night in Sunnyvale, at the National Institute of Technology alumni meeting, Ajoy Bose and Jim Hogan will talk about different aspects of SoC Realization. I’ve been saying for some time that design is changing and the block level is really where the action is. That is the right level at which to put together a virtual platform so that the software can be developed, and it is increasingly the level at which chips are put together. More and more, a chip is an assembly of IP blocks, and that assembly process is known as SoC Realization.

Ajoy Bose will talk about how SoC Realization is where high-level concepts are refined to implementation readiness. Legacy and third-party IP blocks are chosen and integrated, and the overall chip is prepared for back-end implementation. Getting it right at this stage will dramatically reduce implementation challenges and iteration times. Indeed, “getting it right” during SoC Realization could lead to the creation of new markets and renewed growth for the industry, but first completing the flow and implementing the vision will require the collaboration of many.

Jim Hogan will talk about making money from SoC Realization; he wants to focus his EDA investments over the next five years there. Parts of SoC Realization include validating IP quality and functionality; design assembly and IP integration; design data management; power, performance and area feasibility checks; debug and analysis tools; memory and memory controllers; and bare-metal software development.

Increasingly, chips are designed using IP and software, and methodologies need to change to adapt to this. There is still often an unspoken assumption that chips are designed by creating a lot of RTL and then running it through synthesis, place and route and so on, but the reality is that most of the design is re-use of existing IP plus software customization.

Look at something like Apple’s A5 chip. Most of the area is a dual ARM core and a quad-core Imagination GPU. Yes, I’m sure there is a little Apple secret sauce in there, but primarily it is large IP blocks being hooked together.

Details on the meeting are here. I’m pretty sure you don’t have to be an alumnus/a from NIT India to attend.


Interview with Brien Anderson, CAD Engineer
by Daniel Payne on 08-13-2012 at 11:15 am

I first met Brien Anderson on LinkedIn because we share common groups and interests, so I decided to interview him and discover how CAD tools enable IC design at Synaptics, a company with capacitive-sensing technology used in smart phones, tablets and touch screens.

Continue reading “Interview with Brien Anderson, CAD Engineer”