Announcing the Open Source Release of the Xyce Circuit Simulator

Thanks a lot for making this powerful simulator available as open source. I had no problem compiling the parallel version of Xyce following your instructions. In my excitement I tried first simulations on a netlist that uses recent MOSFET models from a fab; I used the HSPICE versions. But one thing was conspicuous: the ternary operator "? :" was not recognized, so I had to replace all the lines in the models containing this operator with the IF(a,b,c) function. Another issue appeared when trying to start the simulation: the .option command is not supported, at least according to what Xyce told me. Maybe I introduced some errors that were not obvious, or I need to include some additional parameters during compilation. Maybe there are also other ways to set the simulator options (if so, how?) .. but so far I'm very happy with my first results.
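For readers who hit the same issue, here is a minimal sketch of the kind of substitution described above; the parameter and variable names are made up for illustration, and only the operator rewrite is the point:

    * HSPICE-style ternary expression, not recognized by Xyce at the time:
    .param vth_eff='(devtype==1) ? vth_n : vth_p'
    * Equivalent form using Xyce's IF(condition, then, else) function:
    .param vth_eff={IF(devtype==1, vth_n, vth_p)}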

Another thing I noticed while comparing parallel ngspice with parallel Xyce: ngspice is able to use multiple threads very efficiently even with a small number of devices. That means that even with fewer than 100 devices one can observe a very good scaling effect. This is certainly due to the way the ngspice programmers have implemented their parallelism: they create multiple threads only during the model evaluation. Maybe this kind of implementation is something the Xyce team could consider too, perhaps depending on the number of devices? It could be helpful for very long simulations ..

Those are my first comments. Thanks again for this wonderful piece of software.
 
Thanks very much for your kind comments. Regarding some of the Hspice netlist features: we aren't 100% compatible with Hspice. We probably could/should support the ternary operator, but I'm not sure how soon that will get into the code.

Regarding .option: you are right, we don't support that, exactly. We do support a related keyword, ".options", which is followed by the name of a numerical package. So, if you want to set solver options for the time integrator, it would be ".options timeint" followed by time integration options such as method=trap or method=gear. If you want to set nonlinear solver options, it would be ".options nonlin" followed by (for example) maxstep=100.
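In netlist form, the two examples just described look like this:

    .options timeint method=gear
    .options nonlin maxstep=100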

The reason we did this is that one of the goals of the Xyce project was to do research into solver algorithms, and we felt that we needed more flexibility than the SPICE .option command would allow. To be honest, even our .options approach has not scaled that well, and we've considered switching to something better.

Regarding parallel scaling: if you are running a small problem in parallel, I recommend trying Xyce's "parallel load, serial solve" option. It will do the device evaluations in parallel, but still do the linear solve in serial on proc 0. For small problems, the vast majority of the compute time is in the device evaluations, so this actually works really well. If you run a small problem in parallel, Xyce *should* do this automatically, but you have to specify the number of processes manually, i.e. mpirun -np 4 Xyce inputfile.cir. The main thing that is "automatic" is that Xyce will detect that the problem is small and then choose a serial direct solver as the linear solver, rather than a parallel iterative solver. But you'll still have to set the number of processors manually.
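For concreteness, here is the invocation described above as it would be typed at a shell prompt (the netlist file name is a placeholder):

    # Launch Xyce on 4 MPI processes; for a small problem Xyce will
    # automatically choose a serial direct linear solver.
    mpirun -np 4 Xyce inputfile.cir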

We have experimented with mixed parallelism (adding threading to the existing MPI implementation), but that isn't ready for prime time.
 
Thank you so much for your kind words about Xyce.

Another issue appeared when trying to start the simulation: the .option command is not supported, at least according to what Xyce told me. Maybe I introduced some errors that were not obvious, or I need to include some additional parameters during compilation.

Yes, this is one big difference between Xyce and the variants of SPICE. Xyce's options are not specified the same way as SPICE's. Rather than have a single ".option" statement that provides access to all simulator parameters, Xyce has a set of ".options <package>" commands that set options for different parts of the simulator (time integrator, device package, harmonic balance, homotopy, etc.).
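As a hedged illustration of the per-package pattern (the timeint line repeats the earlier example; the device-package line is an assumption for illustration only, so consult the Reference Guide for each package's actual option list):

    .options timeint method=trap
    .options device temp=27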

A complete description of the syntax is provided in the Reference Guide that is available on the web site.

As far as multiple threads go: yes, we do have plans to implement some form of small-scale parallelism using threads, but because Xyce's focus is on large-scale simulation it hasn't been as much of a priority.
 
FYI - while at Mentor Graphics we made a FastSPICE circuit simulator compatible with HSPICE syntax, and that required a few man-years of development effort.
 
Thanks, Daniel. I've heard similar estimates from a few other people. We've basically added Hspice compatibility as we've had time and as our users have requested it over the years. We didn't start out with the intent to be Hspice compatible; it's a need that evolved with the project.

I've heard some talk (I think within the CMC/Si2) of standardizing the SPICE netlist language, so it would (in theory) be easier for simulators to be compatible. If that were ever to come to fruition, it would make life a lot easier.
 
I'll still add that the sooner the project gets thrown onto GitHub, the sooner an interested community will gladly contribute, even if it's things like an HSPICE-compatibility test suite or TravisCI hookups for monitoring :)
 
I have just finished building Xyce from the source code, and thought I should leave a note in case anyone is interested in who is working with it. I built it on an early-model Apple Mac Pro which was originally a quad-core machine but has since been modified to be an eight-core one. The replacement processors are a pair of older quad-core Xeons (X5355, 2.66 GHz) which are not Hyper-Threading enabled. The machine has 10 GB of RAM and 1 TB of hard disk storage using a quartet of 250 GB drives in a RAID-0 array. The machine has had OS X entirely scrubbed from it; the boot loader has been reconfigured to boot Linux from standard legacy drives and run it natively (e.g., no Boot Camp, Parallels, or other VM). The Linux distribution is a custom one I designed and built directly from source code archives.

I understand that Xyce is a parallel simulation platform using OpenMPI, but is not otherwise multithreaded as such. Is this correct? If so, are there any plans to incorporate multithreading support into it? This might make it more useful for those having only a few multicore machines rather than a large processing cluster of separate machines. I gather that this is mostly the case for engineers or small contractors with their own private facilities, who I suspect would be the most likely members of the public to have an interest.

Are there any support forums or email lists available to which I might refer? If not, are there plans for any? What is the recommended procedure for bug reports? Is there a bug tracking facility available? Is xyce@sandia.gov still active? I tried sending a message there a couple of days ago, but haven't heard back, so I don't know if it got through.
 
Are there any support forums or email lists available to which I might refer? If not, are there plans for any? What is the recommended procedure for bug reports? Is there a bug tracking facility available? Is xyce@sandia.gov still active? I tried sending a message there a couple of days ago, but haven't heard back, so I don't know if it got through.

xyce@sandia.gov is still active and we received your email, but the person who will be responding to you has been too busy to answer as yet. The team is prepping its next public patch release (6.0.1, due any minute) and scrambling to finish work on the next version (6.1, public release likely sometime in April). In the meantime, I can give you a quick answer to some of your questions.

As yet there are no discussion or support forums for Xyce, though there are plans for one. We don't actually have a lot of users yet, so xyce@sandia.gov is the only channel we have, and it's one-way, from you to us. We're working on it.

Reporting bugs is probably best done through the xyce@sandia.gov address for now. The logistics of supporting a public bug tracker in addition to the ones we have internally are complex, and we probably won't have an external issue tracker for some time.

At the moment, you are correct: Xyce is not multithreaded except for its OpenMPI implementation. Mid-sized problems (thousands of devices) can benefit from the "serial solve, parallel load" capability in MPI mode on small numbers of processors, even if they're just multiple cores on one machine. On certain types of problems, some benefit from threading can also be had by linking to threaded math libraries such as ATLAS or the Intel MKL.
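A hedged sketch of what that linking can look like at build time. The exact mechanism for Xyce's build is described in its building guide; the paths and library names below (MKL's single dynamic library, passed through standard autotools variables) are assumptions for illustration only:

    # Point the link step at a threaded BLAS/LAPACK (here Intel MKL)
    # instead of the reference implementations. Paths are placeholders.
    ./configure LDFLAGS="-L/opt/intel/mkl/lib/intel64" \
                LIBS="-lmkl_rt -lpthread -lm"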
 
I'll add a quick reply here; we have experimented a bit with threading (mostly with OpenMP). There have been two separate efforts. One was to thread the device evaluations, which is an embarrassingly parallel operation. The other has been within some of the linear solvers, which is a bit harder to do effectively.

In our first attempt at threading device evaluations, we found that, compared to modern implementations of MPI, it didn't do any better, even on relatively small multicore machines. And of course, to do it safely we had to put in various memory locks, which made the code fairly ugly. The most recent MPI implementations tend to be very clever about how they are implemented for modern architectures, so threading didn't seem to buy us much, given that we already had MPI-based parallelism that worked really well. We will probably revisit this issue in the future. If we had not already set up the code to be MPI-parallel, I'm sure we would have pushed harder on a threading implementation.

On the linear solver level, that is still a work in progress. The main effort has been with a solver code called ShyLU, under development at Sandia. It is a "hybrid-hybrid" solver, which combines iterative and direct methods, and also combines MPI and threading-based parallelism. That solver has worked pretty well on some circuit problems, but is still in a research phase, so it isn't in the default build of Xyce. You can read more about ShyLU here:

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6267865&tag=1

Finally, to reiterate something Tom said: you can get some benefit from threading just by linking Xyce against threading-enabled LAPACK and BLAS libraries. For some problems (ones that spend most of their time in the linear solve), this seems to help a bit.
 
6.0.1 regression tests - serial build

I ran the 6.0.1 regression tests on a serial build. I note that the following three tests fail: Certification_Tests/XyceAsLibrary/FFTInterface, Certification_Tests/XyceAsLibrary/BUG_1685/c7i, and XygraAPI/xygra. In all three cases this appears to be due to the following setting: $XYCEROOT="missing "; This is used as the root directory for paths used in these three tests. Is the test code for these unfinished?
 
I ran the 6.0.1 regression tests on a serial build. I note that the following three tests fail: Certification_Tests/XyceAsLibrary/FFTInterface, Certification_Tests/XyceAsLibrary/BUG_1685/c7i, and XygraAPI/xygra. In all three cases this appears to be due to the following setting: $XYCEROOT="missing "; This is used as the root directory for paths used in these three tests. Is the test code for these unfinished?

I just answered your email to xyce@sandia.gov in more detail, but for the record:

Those three tests will work properly only if you follow *exactly* the procedure in the "Running the Xyce Regression Suite" web page, which directs you to run the tests from the directory in which you just built Xyce, with the binary that lives in `pwd`/src/Xyce. If you try to use the same command line, but point the run_xyce_regression script at the location of the binary installed by "make install" you will get a failure of those three tests. The issue is that those three tests attempt to link a small test program against a "libxyce.a" library, and the script is unable to deduce the location of the source code from the location of the binary unless they live in the same directory tree.

Those tests *do* pass in all builds of Xyce 6.0.1 that we've tried on all platforms, so long as the relationship between the binary and the test harness directories is maintained as the script expects it.

It's why our Executables page tells you to add the "-library" tag to the tags list if testing a pre-built binary (that tag causes those three tests to be excluded, precisely because they'll fail in exactly the way you have discovered).
 
[...]
It's why our Executables page tells you to add the "-library" tag to the tags list if testing a pre-built binary (that tag causes those three tests to be excluded, precisely because they'll fail in exactly the way you have discovered).

Because this information was spread out across two different pages, it was easy to miss. I just updated the "Running the Xyce Regression Suite" page to have a section at the bottom (after "Some tests are fragile") about these particular tests, and what you have to do if you're testing installed code (as opposed to just-built code).
 
The Xyce(TM) team is pleased to announce the release of Xyce(TM) Version 6.1.

This minor release includes significant improvements over Xyce(TM) versions 6.0 and 6.0.1. Please see the Release Notes on the Xyce(TM) web site, http://xyce.sandia.gov, for a complete list of new features and enhancements.

Highlights for Xyce(TM) Release 6.1 include:

o The device model package interface has been refactored, simplifying the process of adding new devices.
o The ADMS/Xyce package has been improved, making import of Verilog-A models simpler and more robust.
o The BSIM-CMG model 107 has been added.
o A new digital buffer model has been added, and the AND, NAND, OR, and NOR gates have been updated to accept more than two inputs.
o AC, harmonic balance, and MPDE analyses are now supported in parallel.
o The robustness of harmonic balance analysis has been improved.
o The handling of voltage limiting in both the Trapezoid and Gear time integrators has been improved.
o Numerous bugs have been fixed.

For details, see the Xyce(TM) Users' Guide and the Xyce(TM) Reference Guide.

The team is also pleased to announce the creation of an open forum for discussion of Xyce on Google Groups. Details on how to join this forum are on the Documentation and Tutorials page of the web site, http://xyce.sandia.gov/documentation/index.html.

To obtain a copy of Xyce(TM) Release Version 6.1, please see the downloads section of the web site: http://xyce.sandia.gov/downloads

Thank You,
The Xyce(TM) Team


Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
 