
SystemVerilog, slaphappy

simguru

Member
So apparently everyone loved the dysfunctional "interconnect" and "nettype" proposals -

[sv-dc] Mantis items approved

You can find the work record here -

0003398: User defined nets and resolution functions - EDA.org Mantis
0003724: Allow generic interconnect for "typeless" connections - EDA.org Mantis

I would be interested in anyone else's opinion on whether 3398 actually meets its own requirements. I don't think it handles X & Z properly (and strength has been ignored entirely). Certainly the fact that an undriven net of a user-defined nettype goes to X rather than Z gives pause.
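For reference, here's a minimal sketch of what a 3398-style user-defined nettype with a resolution function looks like (the names are mine, not from the proposal):

typedef logic [7:0] byte8_t;

// Resolution function: wired-OR of all drivers on the net.
function automatic byte8_t res_or (input byte8_t drivers[]);
  res_or = '0;
  foreach (drivers[i]) res_or |= drivers[i];
endfunction

nettype byte8_t nt_or with res_or;

module tb;
  nt_or bus;   // no drivers: takes the data type's default value,
               // 'x for 4-state logic -- not 'z as a plain wire would
  initial #1 $display("bus = %h", bus);
endmodule

On a conforming simulator that should print xx, which is exactly the undriven-net behavior I'm questioning.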

It also seems unlikely that anyone will retool their design flows to use "interconnect" instead of "wire", so I expect that part will be deprecated before it ever becomes mainstream.
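For comparison, "interconnect" usage would look something like this (module names are illustrative; an interconnect net can only appear in port connections):

module src (output wire o);
  assign o = 1'b1;
endmodule

module sink (input wire i);
endmodule

module top;
  interconnect n;     // typeless: no data type or value of its own
  src  u0 (.o(n));    // n only ties ports together; its effective
  sink u1 (.i(n));    // type comes from the ports it connects
endmodule

It buys you nothing over "wire" until the ports use user-defined nettypes, which is part of why I doubt flows will change.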


If anyone would like to help me build an open-source SV system so that we can actually get this stuff working properly, let me know (price very negotiable); I can integrate AMS at the same time.

 
An open-source verification project is a great idea, but starting with SV means that we would simply be digging the same hole deeper.

The objective of chip design is to create a data flow and the control logic to perform a function. Data flow uses more gates than control, but it is simpler and therefore easier to verify. All the fuss over drivers and not getting valid data on busses is not the most critical issue; getting the functional logic synthesized and verified comes first. Most ports are simple input-to-output connections anyway. Gate counts are next to meaningless, especially in FPGAs: every 2-input gate uses up a 3- or 4-input LUT, which is equivalent to many "gates". And as for the number of "gates" on a chip, they are typically replicated parts of an SoC where each part has probably been independently verified.

Yes, Verilog was created a long time ago (before OOP and all that abstraction was available), but so what? It was a way to simulate an HDL, which is a description language, and SV is an extension to Verilog still trying to efficiently simulate logic. The fact that Boolean algebra is the way to describe control logic was ignored. Using lines of code to infer the complexity of a design is meaningless: each module port appears in the port list as well as in the port definition, and the always block needed simply to load a flip-flop (sketched below) is so verbose it is ridiculous. Most of this comes from over-concern with physical design when it should be logic design.

I would be happy to participate in a project that focused on logic design followed by physical design. I have spent many years on projects that used this process -- of course, that was before EDA.
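To make the verbosity complaint concrete, this is the standard SV idiom just to load one flip-flop (a minimal sketch; even the compact always_ff form spells out the clock and reset every time):

module dff (
  input  logic clk, rst_n, d,
  output logic q
);
  // A single flip-flop load still needs an explicit sensitivity
  // list and a reset branch.
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) q <= 1'b0;
    else        q <= d;
endmodule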
 
...Starting with SV means that we would simply be digging the same hole deeper...

Yep, so I'm moving the problem into the C++ domain (http://parallel.cc). That is, most SV can be represented in C++, and C++ will work with SystemC too (that was the way SV was headed until the Vera guys jumped aboard).

Using a data-flow representation does not tie you to any particular implementation, and you want a sufficiently high level of abstraction that you can synthesize for multiple targets, so you want to get above RTL to get rid of the clocks (but not too far).

I have a Verilog parser already; making it understand OVM/UVM is probably just a few months' work.
 
Sorry, but I am not able to figure out if the goal is to design a chip or somehow map an application into hardware. FPGAs have LUTs, memory blocks, and flip-flops. Clocks are used as a sort of sample-and-schedule mechanism to avoid race conditions when state changes propagate and cause other erroneous changes; ParC seems to have the same thing disguised as driver resolution and, yes, a clock. Since simulation is key to debug and verification, the debug activity needs a way to work back from an event and determine the conditions/state that existed when each event was scheduled. If there is too much abstraction the task gets harder rather than easier.
 
Sorry, but I am not able to figure out if the goal is to design a chip or somehow map an application into hardware...

ParC is intended to be a functional superset of SV that allows you to do the same things (and more). Using an "asynchronous" design style (clockless/data-driven/self-timed) allows compiler/synthesis tools to work out where to put clocks - that gets you to the level above RTL. I have two versions of "game of life" in the examples, one is synchronous with a global clock, the other uses "pipes" and is self-timed (the only "global" thing is the monitoring).

I tried to get a "pipe" construct into SV, but it was rejected because Vera already had a "mailbox". Yet there is no symmetry between signals and mailboxes in SV, and the mailbox semantics are too complex (as is the SystemC TLM stuff).
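For anyone who hasn't used it, this is the SV mailbox in question -- a class-based FIFO between processes rather than a net, which is the asymmetry I mean (minimal sketch):

module tb;
  mailbox #(int) mb = new();

  // Producer: push three values into the mailbox.
  initial for (int i = 0; i < 3; i++) mb.put(i);

  // Consumer: get() blocks until a value is available.
  initial repeat (3) begin
    int v;
    mb.get(v);
    $display("got %0d", v);
  end
endmodule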

For debugging I just use gdb at the moment - there's a video of ddd/gdb running on the code. It's fairly easy to add extra debugging in the simulation kernel for event history if you want that. I did a similar thing with VCS around 1994 so I could debug my Verilog at full speed in a proper source-level debugger - unfortunately P. Eichnberger decided to delete the code, and everybody suffered a 3x hit for using line callbacks for years (not sure if they've fixed that yet).

The overall goal of ParC is to fold the key features for HDL support into C++ so that there is GNU support for simulation down to transistors (for chip design), and a methodology for programming FPGAs and GP-GPUs - one language for everything.
 
The overall goal of ParC is to fold the key features for HDL support into C++...

The physical aspects of the "programming" (personalization?) of the three are quite different. Transistor (gate-level) design has evolved to master slice as opposed to custom design because custom design is too expensive. FPGAs have LUTs, flip-flops, memories, and interconnect. GPUs and CPUs have instruction sets. Soft-core embedded processors turned out to be slow and expensive, so now hard processors have appeared, and all the architecture and visibility problems are alive and well.

DSP functions on an FPGA are not as much of a problem as real-time/control timing issues. Using tiny C programmable distributed "macro" blocks for functions can achieve higher speed than a processor running an RTOS. In fact it may be analogous to the ASIC master-slice for FPGAs.

There is a somewhat usable C# simulator; it would be interesting to see a ParC simulation of the same design. It has 4 memory blocks, a Verilog ALU module, and a Verilog control module.
 
The physical aspects of the "programming" (personalization?) of the three are quite different....

True, but the idea is to move the differentiation down the compiler chain to the back-end, i.e. to capture the higher-level concepts of threading and communication at the C++ level and preserve them through to run-time, so that you can take advantage of them at the hardware level.

Most Verilog/VHDL simulators just translate to C/C++; with ParC the translation is simpler because you don't have to discard the structure and information that plain C++ can't represent. You can then build EDA tools that work at the ParC level and span both software and hardware design/verification.
 