I am currently working on hierarchical test implementations in SoCs. I have just started my research and already have many unanswered questions.
For now the main question is: how does software compile an RTL/behavioral description into a gate-level netlist? And after that, how is a layout generated?
I am familiar with the front end, i.e. the standard compilers and synthesis tools that are available. What I am looking for is how these EDA tools actually do this internally - the back end of these tools.
I know this is a very generic question, and I am not looking for proprietary/copyrighted information, just the generic algorithms that are used - some freely available academic research that has made its way into commercial EDA tools.
I have done some searching on my part (Google, IEEE Xplore) and have come up with a few papers dating from the 1990s. I hope somebody has good resource links/documents they can share.
A reference for open source synthesis software and some papers: http://www.eecs.berkeley.edu/~alanmi/abc/
For what happens afterwards, start by googling 'place and route'.
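To give a feel for what a placer is doing under the hood, here is a toy sketch (my own illustration, not how any commercial tool is written): place cells on a grid and use simulated annealing to minimize total half-perimeter wirelength. Real placers work at millions of cells using analytical and partitioning methods, but the cost function and move/accept loop are the same idea. All names and the tiny netlist are made up.

```python
# Toy standard-cell placement by simulated annealing (illustration only;
# production placers use analytical/partitioning methods at far larger scale).
import math
import random

def wirelength(pos, nets):
    # Half-perimeter wirelength: bounding box of each net's cell positions.
    total = 0
    for net in nets:
        xs = [pos[c][0] for c in net]
        ys = [pos[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def place(cells, nets, grid=4, iters=20000, seed=0):
    rng = random.Random(seed)
    slots = [(x, y) for x in range(grid) for y in range(grid)]
    rng.shuffle(slots)
    pos = dict(zip(cells, slots))       # random initial placement
    cost = wirelength(pos, nets)
    temp = 10.0
    for _ in range(iters):
        a, b = rng.sample(cells, 2)
        pos[a], pos[b] = pos[b], pos[a]  # trial move: swap two cells
        new = wirelength(pos, nets)
        if new < cost or rng.random() < math.exp((cost - new) / temp):
            cost = new                   # accept (sometimes uphill, early on)
        else:
            pos[a], pos[b] = pos[b], pos[a]  # reject: undo the swap
        temp *= 0.9997                   # cool down
    return pos, cost

cells = list("abcdefgh")
nets = [("a", "b", "c"), ("c", "d"), ("d", "e", "f"), ("f", "g"), ("g", "h", "a")]
pos, cost = place(cells, nets)
print(cost)
```

Routing then has to connect the placed pins under congestion and design-rule constraints, which is a separate (also NP-hard) problem.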
Actually not a bad question - the process has not changed much in more than two decades, while silicon feature sizes have shrunk by orders of magnitude, so it's probably time to ask if it still works. Most digital design is still done at the RTL level as far as I can tell, and that methodology dates from the late '80s, predating the use of multiple clock domains, multicore processing and power management.
The RTL process is basically an exercise in converting finite-state machines (FSMs) at a high level of abstraction into a lower-level form (gates) within a set of design constraints for power, area and timing - with the additional constraint that the gates are usually taken from standard-cell libraries. Usually this is done as a two-step process where the synthesis tools guess at the effects of actual placement and routing (P&R), and synthesis plus P&R sit inside a bigger loop of circuit extraction and verification that checks whether the design constraints are actually met - i.e. "design closure". Synthesis itself usually sits inside a loop with static timing analysis (STA), where the synthesis tool takes a stab at some solutions and STA says whether it's worth trying them in P&R. It's an NP-hard problem.
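A minimal sketch of that FSM-to-gates lowering step (my own toy example, not any tool's algorithm): take the behavioral next-state function of a 2-bit Gray-code counter, enumerate its truth table, "synthesize" a sum-of-products form (one AND term per minterm, ORed together - what a tool would then minimize and map onto library cells), and check that the gate-level form matches the behavioral spec on every input.

```python
# Toy lowering of a behavioral FSM to two-level (sum-of-products) logic.
# Real synthesis adds minimization (e.g. Espresso-style) and technology
# mapping onto a standard-cell library; this only shows the principle.
from itertools import product

def next_state(q1, q0):
    # Behavioral spec: Gray sequence 00 -> 01 -> 11 -> 10 -> 00
    table = {(0, 0): (0, 1), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 0)}
    return table[(q1, q0)]

def synthesize_sop(bit):
    # Collect the minterms where this output bit is 1: one AND term each.
    terms = []
    for q1, q0 in product((0, 1), repeat=2):
        if next_state(q1, q0)[bit]:
            terms.append((q1, q0))
    return terms  # OR of AND terms (literals inverted as needed)

def eval_sop(terms, q1, q0):
    # Gate-level evaluation: OR across AND terms matching the input literals.
    return int(any(q1 == t1 and q0 == t0 for t1, t0 in terms))

sop = [synthesize_sop(bit) for bit in (0, 1)]  # [terms for q1', terms for q0']
for q1, q0 in product((0, 1), repeat=2):
    got = (eval_sop(sop[0], q1, q0), eval_sop(sop[1], q1, q0))
    assert got == next_state(q1, q0)
print("gate-level SOP matches behavioral spec")
```

The constraint-driven part - choosing which minimized form and which cells meet timing/power/area - is where the real difficulty (and the STA loop above) comes in.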
The papers from the 1990s are probably a good enough reference for how synthesis works, since I don't think the technology has changed much apart from being able to handle bigger circuits (and we have more computers to throw at the problem). Boolean satisfiability solvers can be used to help drive synthesis, and that technology probably gets more development than anything EDA-related.
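The classic way SAT shows up in this flow is equivalence checking: build a "miter" that XORs the outputs of the original and optimized circuits, and ask a SAT solver whether any input can make it 1. Here is a minimal sketch of the idea (my own illustration); for a handful of inputs, exhaustive enumeration stands in for the SAT solver that tools like ABC would call.

```python
# Sketch of combinational equivalence checking via a miter: XOR the two
# implementations' outputs and search for an input that sets the XOR to 1.
# Real tools hand the miter to a SAT solver; brute force suffices here.
from itertools import product

def impl_a(a, b, c):          # original logic
    return (a and b) or c

def impl_b(a, b, c):          # "optimized" rewrite to verify (De Morgan)
    return not ((not a or not b) and not c)

def miter_sat(f, g, n_inputs):
    # Return a counterexample input where f and g differ, or None if none.
    for bits in product((False, True), repeat=n_inputs):
        if bool(f(*bits)) != bool(g(*bits)):
            return bits
    return None

print(miter_sat(impl_a, impl_b, 3))  # None means the two are equivalent
```

The same miter construction also drives SAT-based optimization: if the solver proves two internal nodes equivalent, one can be merged away.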
There is a key difference in silicon as dimensions drop below 45nm and device-to-device variance becomes much higher: you probably want to migrate to asynchronous circuits, which are more tolerant of the variance and do better with aggressive power management. While it is possible to do asynchronous synthesis from RTL, it's easier to go the other way round and move to asynchronous FSM descriptions, which can be synthesized to either synchronous or asynchronous logic gates. So far this has not happened because (IMO) the simulation tools are not up to the job, and there isn't much money to be made in fixing that - i.e. Spice simulation is too slow for verifying logic, and Verilog/VHDL can't handle the metastability issues in asynchronous circuits properly.