Common Information for JiTCODE, JiTCDDE, and JiTCSDE¶
In the following, JiTC*DE refers to any of the aforementioned modules.
Unix (Linux, MacOS, …)¶
Usually, you will already have a C compiler installed and need not worry about this step. Otherwise, it should be easy to install GCC or Clang through your package manager. Note that for using Clang, it may be necessary to change the CC flag (see below). Finally, note that getting OpenMP support (see below) for Clang on MacOS seems to be a hassle.
Python should be installed by default as well.
The easiest way to install JiTC*DE is via PyPi like this:
pip3 install jitcode --user
Replace jitcode with jitcdde or jitcsde if that’s what you want. Replace pip3 with pip if you are working in an environment.
Windows¶
Open the Anaconda Prompt and run:
pip install jitcode --user
Replace jitcode with jitcdde or jitcsde if that’s what you want.
Building from source¶
Usually you do not need to do this, but it may be the only way if prepackaged SymEngine doesn’t work on your system.
Install SymEngine from source following the instructions here.
Install the SymEngine Python bindings from source following the instructions here.
Install readily available required Python packages, namely Jinja 2, NumPy, SciPy, and Setuptools.
Install JiTC*DE Common and the desired packages from GitHub. The easiest way to do this is probably:
pip3 install git+https://github.com/neurophysik/jitcode
Here is a summary of commands for Ubuntu (that should be easily adaptable to most other Unixes):
sudo apt install cmake cython3 git libgmp-dev python3-jinja2 python3-numpy python3-scipy python3-setuptools
git clone https://github.com/symengine/symengine
cd symengine
cmake .
make
sudo make install
pip3 install \
    git+https://github.com/symengine/symengine.py \
    git+https://github.com/neurophysik/jitcxde_common \
    git+https://github.com/neurophysik/jitcode \
    git+https://github.com/neurophysik/jitcdde \
    git+https://github.com/neurophysik/jitcsde \
    --no-dependencies --user
Testing the Installation¶
Each module provides a utility function that runs a short basic test of the installation, in particular whether a compiler is present and can be interfaced. For example, you can call it as follows:
import jitcode
jitcode.test()
Networks or other very large differential equations¶
JiTC*DE is specifically designed to be able to handle large differential equations, as they arise, e.g., in networks. There is an explicit example of a network in JiTCODE’s documentation, which is straightforward to translate to JiTCDDE and JiTCSDE.
JiTC*DE structures large source code into chunks, the size of which can be controlled by the option chunk_size, which is available for all code-generation subroutines.
This serves two purposes:
If JiTC*DE handled the code for very large differential equations naïvely, the compiler would have to process megabytes of unstructured code at once, which may use too much time and memory. For some compilers, disabling all optimisation can avert this problem, but compiler optimisations are usually a good thing. Chunking is a compromise between the two: Optimisation still happens within chunks, but not across chunks. We obtained better performance in this regard with Clang than with GCC.
It allows a reasonable parallelisation using OpenMP (see the next section). Note that chunk_size is here also used for regular loops and similar.
If there is an obvious grouping of your \(f\), the group size suggests itself for chunk_size.
For example, if you want to simulate the dynamics of three-dimensional oscillators coupled onto a 40×40 lattice and if the differential equations are grouped first by oscillator and then by lattice row, a chunk size of 120 suggests itself.
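As a sanity check of this arithmetic, here is a pure-Python sketch (illustrative only; JiTC*DE performs the actual chunking internally) of how the chunk size of 120 arises for the lattice example:

```python
# Illustrative arithmetic only; the variable names are not part of JiTC*DE's API.
n_rows, n_cols, dim = 40, 40, 3          # 40×40 lattice of 3D oscillators
n_equations = n_rows * n_cols * dim      # 4800 scalar equations in total
chunk_size = n_cols * dim                # 120: one lattice row per chunk

# The index ranges the compiler would see as separate chunks:
chunks = [
    range(start, start + chunk_size)
    for start in range(0, n_equations, chunk_size)
]
print(len(chunks))  # 40 chunks, one per lattice row
```

With equations grouped first by oscillator and then by lattice row, each chunk then aligns exactly with one row of the lattice.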
Also note that simplifications and common-subexpression eliminations may take a considerable amount of time (and can be disabled).
In particular, if you want to calculate the Lyapunov exponents of a larger system, it may be worthwhile to disable simplifications.
OpenMP Support (multi-processing)¶
Code generated by JiTC*DE contains OpenMP pragmas that make the compiler parallelise the code for a multi-core machine – provided that the respective compiler and linker flags are used and the respective libraries are installed.
Each compiling command has an argument omp that, when set to True, causes the most generic of these flags to be used.
Depending on the compiler, these flags may not work or may not be the best choice. In this case, you can instead pass the desired compiler and linker flags as a pair of lists of strings to the omp argument.
For example, for GCC, you might use:
ODE.compile_C( omp=(["-fopenmp"],["-lgomp"]) )
In most cases, the chunk sizes used by OpenMP correspond to the chunk_size argument of the respective code-generating instruction.
Note that parallelisation comes with a considerable overhead. It is therefore only worthwhile if both of the following hold:
Your differential equation is huge (ballpark: hundreds of instructions).
You have fewer problems (realisations) than cores or cannot run several problems in parallel due to memory constraints or similar.
Choosing the Compiler¶
You can find out which compiler is used by explicitly calling I.compile_C(verbose=True), where I is your JiTC*DE object.
Linux (and other Unixes, like MacOS)¶
Setuptools uses your operating system’s CC environment variable to choose the compiler. Therefore, this is what you have to change if you want to use a different compiler.
Some common ways to do this are (using clang as an example for the desired compiler):
Run export CC=clang in the terminal before running JiTC*DE. Note that you have to do this anew for every instance of the terminal or write it into some configuration file.
Set os.environ["CC"] = "clang" in Python before the compilation happens.
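A minimal sketch of the second option; note that the variable must be set before JiTC*DE triggers the compilation step:

```python
import os

# Must happen before JiTC*DE compiles anything;
# setting it afterwards has no effect on the already-built module.
os.environ["CC"] = "clang"
```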
So far, Clang has proven to be better at handling large differential equations.
Windows¶
I haven’t tried it myself, but this site should help you.
Choosing the Module Name¶
The only reason why you may want to change the module name is if you want to save the module file for later use (with save_compiled). To do this, use the modulename argument of the respective code-generation function.
If this argument is None or empty, the filename will be chosen by JiTC*DE based on previously used filenames or fall back to a default.
Note that it is not possible to re-use a module name for a given instance of Python (due to the limitations of Python’s import machinery).
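The limitation in the last note stems from Python caching imported modules by name. A pure-Python illustration (with a hypothetical module name) of this caching:

```python
import sys
import types

# Register a dummy module under a hypothetical name:
first = types.ModuleType("jitced_example")
first.version = 1
sys.modules["jitced_example"] = first

# Any later import of that name returns the cached module,
# even if a different compiled file with the same name exists by then:
import jitced_example
assert jitced_example.version == 1
```

This is why a fresh module name (or a fresh Python process) is needed to load a newly compiled module.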
SymPy vs SymEngine¶
SymPy’s core is completely written in Python and hence rather slow. Eventually, this core shall be replaced by a faster, compiled one: SymEngine, more specifically the SymEngine Python wrapper. SymEngine is not yet ready for this, but it already has everything needed for JiTC*DE’s purposes, except for some side features like common-subexpression elimination and lambdification (only for JiTCDDE). Also, SymEngine internally resorts to SymPy for some features like simplification. By using SymEngine instead of SymPy, code generation in JiTC*DE is up to nine hundred times faster.
Practically, you can use both SymPy and SymEngine to provide the input to JiTC*DE, as they are compatible with each other.
However, using SymPy may considerably slow down code generation.
Also, some advanced features of SymPy may not translate to SymEngine, but so far the only ones I can see making sense in a typical JiTC*DE application are SymPy’s sums, and those can easily be replaced by Python sums.
If you want to preprocess JiTC*DE’s input in some way that only SymPy can handle, the sympy_symbols submodule provides SymPy symbols that work the same as what jitc*de provides directly, except for speed.
Here is an example for imports that make use of this:
from jitcode import jitcode
from jitcode.sympy_symbols import t, y
Note that while SymEngine’s Python wrapper is sparsely documented, almost everything that is relevant to JiTC*DE behaves analogously to SymPy and the latter’s documentation serves as a documentation for SymEngine as well. For this reason, JiTC*DE’s documentation also often links to SymPy’s documentation when talking about SymEngine features.
Conditionals¶
Many dynamics contain a step function, Heaviside function, conditional, or whatever you like to call it. In the vast majority of cases, you cannot naïvely implement this, because discontinuities can lead to all sorts of problems with the integrators. Most importantly, error estimation and step-size adaption require a continuous derivative. Moreover, any Python conditionals will be evaluated during code generation and not at runtime, which is not what you want in this case.
There are two general ways to solve this:
If your step-wise behaviour depends on time (e.g., an external pulse that is limited in time), integrate up to the point of the step, change the value of a control parameter, and continue. Note that for DDEs this may introduce a discontinuity that needs to be dealt with like an initial discontinuity.
Use a sharp sigmoid instead of the step function.
jitcxde_common provides a service function conditional, which can be used for this purpose and is documented below.
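The first approach can be illustrated with a toy stand-in (a plain Euler integrator instead of JiTC*DE; all names are illustrative):

```python
# Toy illustration of splitting an integration at the time of a step change.
# A plain Euler integrator stands in for the actual JiTC*DE integration.
def euler(f, state, t_start, t_end, dt=1e-3):
    t = t_start
    while t < t_end:
        state += dt * f(state)
        t += dt
    return state

# dx/dt = -x + p, where the control parameter p jumps from 0 to 1 at t = 5:
x = 1.0
x = euler(lambda x: -x + 0.0, x, 0.0, 5.0)   # before the step, p = 0
x = euler(lambda x: -x + 1.0, x, 5.0, 10.0)  # after the step, p = 1
# x has now relaxed towards the new fixed point 1
```

In JiTC*DE, the analogue is to stop the integration at the step time, change the parameter, and resume.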
- conditional(observable, threshold, value_if, value_else, width=None)¶
Provides a smoothed and thus integrator-friendly version of a conditional statement. For most purposes, you can imagine this being equivalent to:
def conditional(observable, threshold, value_if, value_else):
    if observable < threshold:
        return value_if
    else:
        return value_else
The important difference is that this is smooth and evaluated at runtime.
width controls the steepness of the sigmoid used to implement this. If not specified, it will be guessed – from the threshold if possible.
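For intuition, here is a pure-Python stand-in for such a sigmoid-based conditional using tanh (this is not jitcxde_common’s actual implementation, whose details may differ):

```python
import math

def smooth_conditional(observable, threshold, value_if, value_else, width=1e-3):
    # A steep tanh interpolates between value_if (observable well below the
    # threshold) and value_else (observable well above it):
    weight = 0.5 * (1 + math.tanh((observable - threshold) / width))
    return value_if + (value_else - value_if) * weight

print(smooth_conditional(0.0, 1.0, 10.0, 20.0))  # ≈ 10.0 (below threshold)
print(smooth_conditional(2.0, 1.0, 10.0, 20.0))  # ≈ 20.0 (above threshold)
```

Unlike a Python if, this expression stays differentiable, which is what the integrator’s error estimation needs.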
Common Mistakes and Questions¶
If you want to use mathematical functions like sqrt, you have to use the SymEngine variants. For example, instead of numpy.sin, you have to use symengine.sin.
If you get unexpected or cryptic errors, please run the respective class’s check function and also verify that all input has the right format and functions have the right signature.
If your integration produces implausible results or raises UnsuccessfulIntegration, please check that your integration parameters (error tolerances, etc.) and sampling step make sense for your problem. The default settings of JiTC*DE (like those of most integration modules) work best when:
All dynamical variables have the same order of magnitude, which in turn is close to 1.
The order of magnitude of the smallest time scale of the dynamics is 1.
The sampling step is one to three orders of magnitude smaller than the smallest time scale of the dynamics.
If JiTC*DE’s code generation and compilation are too slow or burst your memory, check the following:
Everything in the previous point.
Is your sampling step reasonably large, i.e., somewhat smaller than the smallest time scale of your dynamical system?
Did you deactivate simplifications and common-subexpression eliminations?
Did you use a generator?
Did you use chunking?
Does disabling simplification or common-subexpression elimination (for all applicable processing steps) help?
Did you use SymEngine symbols and functions instead of SymPy ones?
Consider using Clang as a compiler.
If the memory used by the compiler is a problem, try to reduce the optimisation level via compiler flags (see Compiler and Linker Arguments), e.g., by using a lower -O level.
There is a remote chance that you get a ValueError: “assignment destination is read-only” when working with arrays returned by JiTC*DE. This is because the respective array directly accesses JiTC*DE’s internal state for efficiency, and if you could write to it, bizarre errors would ensue. If you want to modify such an array, you must make a copy of it first.
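A sketch of the symptom and the fix, using a NumPy array whose writeable flag is cleared to mimic an array backed by internal state (JiTC*DE’s actual arrays are produced differently):

```python
import numpy as np

state = np.zeros(3)
state.flags.writeable = False    # mimics an array backed by internal state

try:
    state[0] = 1.0               # ValueError: assignment destination is read-only
except ValueError:
    state = state.copy()         # the fix: work on a copy instead
    state[0] = 1.0

print(state)  # [1. 0. 0.]
```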