Migrating from CXX - sys-bio/roadrunner GitHub Wiki
There are several possibilities that we have been playing with.
## Rust

### Pros
- No performance tradeoffs, since there is no runtime or garbage collector
- Guaranteed memory safety through language semantics
- Guaranteed thread safety without a runtime, meaning we can safely make roadrunner faster!
- Eliminates null pointer dereferences and out-of-bounds indexing, and catches integer over/underflow in debug builds
- No more CMake or crazy link-time errors, and great dependency management thanks to `cargo`
- No more Poco, thanks to the library ecosystem on `crates.io`
- CRTP-style inlining for free (generic code is monomorphized and optimized by the compiler)
- Python interop through libraries like PyO3, built on Rust's procedural macro system
### Cons
- Compile times can be slower than C++'s
- Tooling (e.g. IDE plug-ins) has room for improvement
- Safe LLVM bindings (inkwell) are good, but still under active development
- The foreign function interface is limited to the C ABI
## Migration Plan
EDIT: I have a new idea about getting rid of SWIG using the PyO3 library in Rust. This would bring everything in roadrunner into one language and one set of tools: Rust.
We can also make the Rust-to-C++ FFI much cleaner by exposing one FFI function per class. That function takes an enum (a tagged union, in C++ terms) whose tag selects which Rust method to invoke and whose payload carries the extra parameters as arguments.
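The per-class dispatch idea could look roughly like this in Rust. This is a minimal sketch, not the actual roadrunner API: the names `IntegratorOp` and `integrator_call` are invented, and the "class" state is reduced to a single `f64` for illustration.

```rust
use std::os::raw::c_int;

// A #[repr(C)] enum with payloads lowers to a tag plus a union (RFC 2195),
// so the C++ side can mirror it as a tagged union.
#[repr(C)]
pub enum IntegratorOp {
    /// Set the integrator step size.
    SetStepSize(f64),
    /// Advance by the given number of steps; returns the steps taken.
    Step(c_int),
    /// Reset internal state.
    Reset,
}

/// One FFI entry point for the whole (toy) Integrator class; the enum tag
/// selects which method to invoke on the Rust side.
#[no_mangle]
pub extern "C" fn integrator_call(step_size: &mut f64, op: IntegratorOp) -> c_int {
    match op {
        IntegratorOp::SetStepSize(h) => {
            *step_size = h;
            0
        }
        IntegratorOp::Step(n) => n,
        IntegratorOp::Reset => {
            *step_size = 0.0;
            0
        }
    }
}
```

This keeps the extern surface to one function per class, so adding a method means adding an enum variant rather than another `extern "C"` symbol.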
I was thinking of taking a relatively isolated part of the codebase, like `source/C`, and turning some of it into Rust. Because that code is rarely touched or used, migrating it shouldn't cause development headaches.
We'll see how much of a headache it is to wrap the C++ in C and vice versa. With something like StringListContainer, we could replace the entire class with C calls into the Rust implementation and treat it as an abstract data type (ADT).
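The ADT approach could look like the usual opaque-handle pattern: C++ only ever sees a pointer and calls through a handful of free functions. This is a hypothetical sketch; the function names (`string_list_new`, etc.) are invented and a real replacement would cover the full StringListContainer interface.

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

/// The Rust type stays opaque to C++; only a pointer crosses the boundary.
pub struct StringList(Vec<String>);

#[no_mangle]
pub extern "C" fn string_list_new() -> *mut StringList {
    Box::into_raw(Box::new(StringList(Vec::new())))
}

#[no_mangle]
pub extern "C" fn string_list_add(list: *mut StringList, s: *const c_char) {
    let list = unsafe { &mut *list };
    // Copy the C string into an owned Rust String.
    let s = unsafe { CStr::from_ptr(s) }.to_string_lossy().into_owned();
    list.0.push(s);
}

#[no_mangle]
pub extern "C" fn string_list_len(list: *const StringList) -> usize {
    unsafe { &*list }.0.len()
}

#[no_mangle]
pub extern "C" fn string_list_free(list: *mut StringList) {
    if !list.is_null() {
        // Reconstitute the Box so Rust frees the allocation.
        unsafe { drop(Box::from_raw(list)) };
    }
}
```

Ownership stays entirely on the Rust side; the C++ caller's only obligation is to pair every `string_list_new` with a `string_list_free`.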
One thing to note: it may be difficult to migrate classes that inherit (e.g. GillespieIntegrator from Integrator), because C++ classifies these as non-POD (plain old data) types, meaning their layout is less well defined. We could copy the data into a POD type so that it can be passed across the FFI barrier, but this makes each FFI call costly. Of course, this cost disappears entirely once the whole class is ported to Rust.
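The POD copy would be a `#[repr(C)]` struct that both sides agree on. A hypothetical sketch, with invented field names standing in for whatever GillespieIntegrator actually carries:

```rust
// A #[repr(C)] "POD mirror": the C++ side copies its non-POD object's
// fields into this layout-compatible struct before crossing the FFI
// boundary. Field names here are invented for illustration.
#[repr(C)]
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct GillespieData {
    pub seed: u64,
    pub current_time: f64,
}

/// FFI entry point taking the POD snapshot by value and returning the
/// updated snapshot (toy "step": just advance the clock).
#[no_mangle]
pub extern "C" fn gillespie_advance(mut data: GillespieData, dt: f64) -> GillespieData {
    data.current_time += dt;
    data
}
```

The copy in and out of the mirror struct is exactly the per-call overhead the paragraph above is worried about.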
Once we're confident in the build/link system, we can start on a bigger module. I think the best long-term candidate is replacing the LLVMExecutableModel with a RustExecutableModel. The Rust implementation would also use LLVM, just through calls to the LLVM-C bindings.
One pain point might be the llvm-sys bindings, because they are largely unsafe. If no one publishes safe bindings soon, we can migrate using the unsafe bindings, and if something breaks, it isn't Rust's fault. We can hopefully add safe abstractions as needed.
## Build System
Rust has a very robust build infrastructure in `cargo`. It's cross-platform and easy to set up. It can emit static libraries (`.lib`/`.a`), which are then linked into rr by the C++ compiler.
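Emitting a static library is a one-line change in `Cargo.toml` (the crate name below is invented):

```toml
# Build this crate as a static library (.lib/.a) for the C++ linker
[lib]
name = "rr_rust"            # hypothetical library name
crate-type = ["staticlib"]
```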
### Timing Passes

You can time the nightly compiler's passes by passing `-Z time-passes` to `rustc` (e.g. `cargo +nightly rustc -- -Z time-passes`).
### Faster Linking

Cargo ships with the LLVM linking infrastructure `lld` and its Windows variant `lld-link`. You can use it by adding the following to your user's `.cargo/config` file, assuming `x86_64-pc-windows-msvc` is your target triple. Remember the `pc` part!
```toml
[target.x86_64-pc-windows-msvc]
linker = "C:\\path\\to\\lld-link"
```
A small test compiling `llvm-sys` with examples shortened the build from 5 min to 4 min. Compiling inkwell with examples takes 7 min using `lld-link`.
## LLVM Bindings
The Rust project itself has yet to expose its LLVM bindings, but inkwell is the best attempt so far at safe LLVM bindings. Unfortunately, due to the nature of building a JIT compiler, we will still have to use `unsafe` when calling JIT'd functions.
### llvm-sys
I had to make cargo link against `msvcrtd.lib` when building debug (I haven't tried release mode yet). I did that by adding this to the crate's `build.rs`:

```rust
// build.rs: tell rustc to link the MSVC debug C runtime
fn main() {
    println!("cargo:rustc-link-lib={}", "msvcrtd");
}
```