Development
Quickstart
Clone the repository:
git clone https://github.com/Systems-Theory-in-Systems-Biology/EPI.git
git clone git@github.com:Systems-Theory-in-Systems-Biology/EPI.git
Should I choose https or ssh?
You can clone the repository over https or ssh. Use https if you only want to obtain the code. Use ssh if you are registered as a developer on the repository and want to push changes to the code base. If you want to contribute to the project but are not a registered developer, create a fork of the project first. In this case, clone your fork, not this repository.
Install poetry:
curl -sSL https://install.python-poetry.org | python3 -
Install dependencies:
For amici (sbml):
sudo apt install swig libblas-dev libatlas-base-dev libhdf5-dev
For cpp:
sudo apt install cmake libeigen3-dev pybind11-dev
Install eulerpi:
poetry install --with=dev --extras=sbml
Run the tests:
poetry run pytest
You can add the --verbose parameter to get a more detailed report.
Maintaining the repository
Here is the most important information on how to maintain this repository.
Dependency Management with Poetry
We use poetry as the build system, for dependency management, and for the virtual environment. During the Quickstart we installed all dependencies into the virtual environment, therefore:
IMPORTANT
Run all commands in the following sections in the poetry shell. It can be started with poetry shell. Alternatively, you can run commands with poetry run <yourcommand>.
Run poetry add package_name to add the library/package with the name package_name as a dependency to your project. Use poetry add --group dev package_name to add package_name to your dev dependencies. You can use arbitrary group names.
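As an illustration, adding a runtime dependency and a dev-group dependency produces pyproject.toml sections like the following (the package names and version constraints here are hypothetical, not this project's actual dependencies):

```toml
# Hypothetical result of `poetry add numpy` and `poetry add --group dev pytest`
[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.26"

[tool.poetry.group.dev.dependencies]
pytest = "^7.4"
```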
For more information read the Poetry Documentation.
Code quality checks
We use black, flake8, and isort to maintain a common style and check the code. To have your code checked automatically, install the pre-commit hook:
pre-commit install
You can also check your changes manually:
pre-commit run --all-files
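For orientation, a minimal .pre-commit-config.yaml wiring up these three tools could look like this (the hook revisions are placeholders; the repository's actual config may differ):

```yaml
# Sketch of a pre-commit config for black, isort, and flake8 -- revisions are placeholders.
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/isort
    rev: 5.13.2
    hooks:
      - id: isort
  - repo: https://github.com/pycqa/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
```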
Testing with pytest
pytest
You can generate a coverage report by running the following commands in your terminal. Be aware that this might take a long time; consider lowering the number of steps in the sampling.
coverage run -m pytest -v
coverage report
coverage html
Running the tutorial (jupyter notebook)
The jupyter notebook can be run using
VS Code: https://code.visualstudio.com/docs/datascience/jupyter-notebooks
shell + browser:
jupyter notebook
In the first case you need to select the poetry environment when selecting the interpreter, in the second case you need to run the command in the poetry shell.
Profiling with scalene
You can profile eulerpi with scalene (or gprofile) using the commands:
python3 -m pip install -U scalene
scalene tests/profiling.py
This will create a profile.html file, which you can open in your browser. Do not rely on the OpenAI optimization proposals; in scalene they are often plain wrong.
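The profiling entry point is just an ordinary Python script that exercises the code you want profiled. A minimal stand-in (toy workload, not the repository's actual tests/profiling.py) could look like:

```python
# Toy workload for scalene -- a stand-in, not the repository's actual tests/profiling.py.

def slow_sum(n: int) -> int:
    """Deliberately naive loop so scalene has something to attribute time to."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    print(slow_sum(10_000))
```

Running `scalene` on such a script attributes time and memory line by line, so the loop body would dominate the report.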
Documentation with Sphinx
cd docs
sphinx-apidoc -e -f -o source/ ../
make html
All Sphinx extensions used to create this documentation, and further settings, are stored in the file docs/source/conf.py.
If you add extensions to conf.py which are not part of Sphinx, add them to the docs/source/requirement.txt file to allow the GitHub Action mmaraskar/sphinx-action@master to still build the documentation.
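For orientation, the extension list in a Sphinx conf.py typically looks like the following; the extensions shown are common choices, not necessarily the exact ones this project uses:

```python
# Excerpt of a typical docs/source/conf.py -- illustrative, not this project's exact file.
extensions = [
    "sphinx.ext.autodoc",    # pull API docs from docstrings
    "sphinx.ext.napoleon",   # support Google/NumPy docstring styles
    "sphinx.ext.viewcode",   # link documentation to highlighted source
]
```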
A cheatsheet for reStructuredText with Sphinx.
Hosting with GitHub Pages
To publish the documentation on github pages you probably have to change some settings in the GitHub Repository
Settings -> Code and automation -> Pages -> Build and Deployment:
- Source: Deploy from a branch
- Branch: gh-pages && /(root)
Changelog
We use the Keep a Changelog format for the changelog. It should be updated with every pull request.
Versioning
We use Semantic Versioning. A version number is composed of three parts: major.minor.patch
The major version should be incremented when you make incompatible changes.
The minor version should be incremented when you add new functionality in a backward-compatible manner.
The patch version should be incremented when you make backward-compatible bug fixes.
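The bump rules above can be sketched in a few lines (an illustrative helper, not part of eulerpi):

```python
# Illustrative semantic-version bumping -- not part of eulerpi.

def bump(version: str, part: str) -> str:
    """Increment the major, minor, or patch part of a 'major.minor.patch' string."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":   # incompatible changes
        return f"{major + 1}.0.0"
    if part == "minor":   # backward-compatible new functionality
        return f"{major}.{minor + 1}.0"
    if part == "patch":   # backward-compatible bug fixes
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part!r}")
```

For example, bump("1.4.2", "minor") returns "1.5.0"; note that bumping a part resets the parts to its right.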
Every time a new version is tagged, a GitHub Actions workflow is triggered which builds the package and uploads it to PyPI.
Please update the version number in the pyproject.toml file before tagging the version.
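Such a tag-triggered release workflow typically looks roughly like this; the file name, action versions, and secret name below are assumptions, not the repository's actual workflow:

```yaml
# .github/workflows/release.yml -- illustrative sketch, not the project's actual workflow.
name: Publish to PyPI
on:
  push:
    tags:
      - "v*"
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Build and publish with poetry
        run: |
          pip install poetry
          poetry publish --build --username __token__ --password ${{ secrets.PYPI_TOKEN }}
```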
Test Deployment to TestPyPi
You have to set up TestPyPI once:
poetry config repositories.testpypi https://test.pypi.org/legacy/
poetry config http-basic.testpypi __token__ pypi-your-api-token-here
Build and deploy:
poetry build
poetry publish -r testpypi
Test this with
python3 -m pip install --index-url https://test.pypi.org/simple/ --no-deps eulerpi
Deployment with GitHub CI (recommended)
- Check out the main branch: git checkout main
- Update the version number X.X.X in CHANGELOG.md and pyproject.toml
- Set a new version tag: git tag -a vX.X.X -m "Release version X.X.X" and push it: git push origin vX.X.X
- Check whether the CI deployment was successful on GitHub and finally on PyPI. Both may need some time to run and update.
Deployment with Poetry (not recommended)
You have to set up PyPI once:
poetry config pypi-token.pypi pypi-your-token-here
Build and deploy:
poetry publish --build
Test this with
pip install eulerpi[sbml]
Jax with CUDA
Jax can be run with CUDA on the GPU. However, you need a recent NVIDIA graphics driver, the CUDA toolkit, and cuDNN installed. Getting the versions right can cause headaches ;)
I used the following tricks to get it running:
# for cuda toolkit
export CUDA_HOME=/usr/local/cuda
export PATH="/usr/local/cuda-12.0/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-12.0/lib64:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-11-fake/lib64:$LD_LIBRARY_PATH"
Follow this issue to see whether you possibly need to create the cuda-11-fake folder as a copy of the 12.0 folder and create the ".so.11" libraries as symlinks using the script below.
#!/bin/bash
# Find all files in the current directory that match the pattern "lib*.so"
for file in lib*.so; do
    # Extract the base name of the file (without the ".so" extension)
    base_name="${file%.*}"
    # Construct the name of the symbolic link we want to create
    link_name="${base_name}.so.11"
    # Check if the link already exists
    if [ ! -e "$link_name" ]; then
        # Create the symbolic link if it doesn't exist
        ln -s "$file" "$link_name"
    fi
done
It can happen that old code is executed due to the generated __pycache__ directories; for example, an old version of CUDA or cuDNN could be used. If you believe that this is happening:
pip install pyclean
py3clean .