Advanced reconstruction with Tyger - josalggui/MaRGE GitHub Wiki
From release v1.0.0, MaRGE is compatible with Tyger, the open-source toolkit from Microsoft for streaming signal data to and from remote compute for on-demand processing.
The MaRGE–Tyger integration enables seamless execution of remote reconstruction and processing pipelines directly within the MaRGE workflow. Its architecture, illustrated in the figure below, bridges local data acquisition with scalable remote computation, allowing users to leverage containerized processing resources without disrupting the standard scanning procedure.

In this integration, native MaRGE raw data files are first converted locally from the proprietary .mat format into the open MRD (Magnetic Resonance Data) standard. Alongside the converted dataset, MaRGE generates a .yaml configuration file that specifies the selected reconstruction or processing pipeline and defines all required inputs. These parameters are configured directly within the MaRGE graphical environment, ensuring consistency between acquisition settings and remote execution.
The MRD dataset and corresponding configuration file are then streamed to a remote computing resource via Tyger. The requested pipeline is executed inside a Docker container, ensuring reproducibility and environment isolation. When deployed in a cloud environment (e.g., Azure), the required container image can be dynamically retrieved from a public registry; alternatively, it may be pre-deployed on a designated remote node. Upon completion of processing, the reconstructed data are streamed back to the local system and automatically integrated into the MaRGE interface.
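As a purely hypothetical illustration of what such a configuration file might carry (every key name below is an assumption for illustration, not MaRGE's actual schema), it could look like:

```yaml
# Hypothetical illustration only -- key names are NOT MaRGE's actual schema.
pipeline: rare_reconstruction                  # requested remote pipeline
input: scan.mrd                                # converted MRD dataset
tyger_server: ssh://user@<IP>:22/opt/tyger/api.sock
batch_size: 500                                # data streamed per iteration
snraware_version: Local                        # None, Local, or TEP
docker_image: ghcr.io/teresaguallartnaval/dist_corr_tyger:v1
```

The actual file is generated automatically by MaRGE from the settings chosen in the graphical interface.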
The MaRGE–Tyger integration extends the standard acquisition workflow with advanced remote and model-based processing features. The current implementation supports the following capabilities:
- **Streaming of high-demand tasks**: Computationally intensive reconstruction and processing pipelines can be offloaded to remote computing resources through Tyger. Raw MRD datasets and their associated configuration files are streamed for execution within containerized environments, enabling scalable performance without impacting local scanner operation.
- **SNRAware processing using the local model**: The framework supports SNRAware reconstruction strategies that leverage a locally available noise model, improving robustness and reconstruction quality under varying noise regimes.
- **Reconstruction with prior knowledge of the magnetic field**: Reconstruction pipelines can incorporate prior information about the magnetic field distribution. Integrating field knowledge into the processing stage enables improved correction of field-related distortions and enhanced image fidelity.
The requirements for using Tyger reconstruction and Tyger denoising with SNRAware are the following:
- GPU. When running Tyger on a local computer, an NVIDIA 5060 has been tested as sufficient for performing both reconstruction and denoising in approximately 2 minutes.
- The NVIDIA toolkit, drivers, and CUDA version corresponding to the GPU that will be used.
- For local execution, Docker must be installed and running.
- SSH installed and running on the satellite PC for remote access. Not needed if the same PC is used for both acquisition and processing.
- Ubuntu 20.04 or later on any PC running Tyger. The installation instructions in this guide are Ubuntu-specific.
- Network connectivity: a stable internet connection for Tyger TEP cloud use, or a dedicated Ethernet link between the scanner PC and the satellite GPU node for local setups.
- A minimum of 16 GB RAM is recommended for the GPU reconstruction node, particularly when running deep learning models or iterative reconstructions.
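The software items in the list above can be sanity-checked before a session. A minimal stdlib-only Python sketch (the function name and tool list are illustrative, not part of MaRGE):

```python
import shutil

def missing_tools(tools, which=shutil.which):
    """Return the command-line tools from `tools` that are not found on the PATH."""
    return [t for t in tools if which(t) is None]

# Tools the Tyger workflow relies on, per the requirements above:
required = ["docker", "ssh", "tyger", "nvidia-smi"]
```

Calling `missing_tools(required)` before starting flags anything still left to install.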
The integration supports both acquisition-time and post-processing workflows. Within the acquisition interface, authenticated access to Tyger services is provided through the graphical interface, and users can select the desired processing pipeline prior to sequence execution. Reconstruction results are returned and displayed automatically as part of the acquisition output, requiring no additional intervention beyond launching the scan. To prevent disruption of scanner operation, MaRGE supports parallel execution: new sequences may be initiated while previously acquired datasets are being transmitted or reconstructed remotely.
In addition to real-time acquisition workflows, Tyger functionality is also accessible from the post-processing interface. Users can apply remote pipelines to previously acquired raw datasets through a dedicated Tyger tab, extending the same remote execution capabilities beyond the live scanning context.
All features described here are implemented in the current version of the MaRGE repository.
The following parameters are set in the Tyger configuration menu in MaRGE:
- **Tyger server**: address of the server where processing will take place. It may be a Tyger TEP server URL, a remote satellite PC, or the local computer itself. For local or satellite use, the format is `ssh://user@<IP>:22/opt/tyger/api.sock`.
- **Tyger batch size**: batch size used for image reconstruction with prior knowledge. When using the Docker image for distortion correction (`ghcr.io/teresaguallartnaval/dist_corr_tyger:v1`), the batch size sets the amount of data transferred per iteration. A larger value is faster but may overflow GPU memory, depending on the size of the image to process. For an NVIDIA 5060, values between 200 and 500 work well. For Tyger TEP, the batch size is usually set to 1000.
- **SNRAware version**: `None`, `Local`, or `TEP`, according to the SNRAware version to use.
- **Docker for distortion correction**: name of the Docker image to use for distortion correction (e.g. `ghcr.io/teresaguallartnaval/dist_corr_tyger:v1`). This is a Docker image name, not a path to a local Dockerfile.
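The batch-size trade-off can be made concrete with a small arithmetic sketch: a larger batch means fewer streaming iterations but a bigger per-iteration GPU memory footprint. The function name is illustrative, not part of MaRGE:

```python
import math

def streaming_iterations(n_samples, batch_size):
    """Number of transfer iterations needed to stream n_samples in batches."""
    return math.ceil(n_samples / batch_size)

# e.g. a 256 x 256 slice has 65536 k-space samples:
# batch_size 200 -> 328 iterations; 500 -> 132; 1000 (TEP) -> 66
```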
Tyger TEP (Technology Evaluation Program) provides access to cloud computing resources hosted on Microsoft Azure. Experiments connect to an NVIDIA GPU rack hosted online.
To run a reconstruction through Tyger TEP:
- Authenticate with the TEP server using the Tyger login button in the MaRGE graphical interface. Valid TEP credentials provided by Microsoft are required.
- Select the desired pipeline from the Tyger tab in MaRGE. For SNRAware denoising, set the SNRAware version to `TEP`.
- Run the sequence normally (implemented for both Rare_PyPulseq and Rare_DoubleImage). MaRGE will automatically convert the raw data to MRD format, generate the YAML configuration file, and stream both to the TEP server.
- The reconstructed data will be returned and displayed in the MaRGE interface automatically upon completion.
Parallel execution is supported: new sequences can be launched while previous datasets are being transmitted or reconstructed remotely, avoiding idle scanner time.
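This parallel-execution model (the scanner keeps acquiring while reconstruction runs remotely) can be sketched with a thread pool standing in for the remote Tyger node. All names here are illustrative stand-ins, not MaRGE code:

```python
from concurrent.futures import ThreadPoolExecutor

def acquire(sequence):
    """Stand-in for running a sequence on the scanner."""
    return f"raw:{sequence}"

def remote_reconstruct(raw):
    """Stand-in for streaming raw data to Tyger and reconstructing it."""
    return raw.replace("raw:", "img:")

with ThreadPoolExecutor(max_workers=2) as tyger_node:
    pending = []
    for seq in ["scan_1", "scan_2", "scan_3"]:
        raw = acquire(seq)                      # scanner keeps acquiring...
        pending.append(tyger_node.submit(remote_reconstruct, raw))  # ...while recon is offloaded
    images = [job.result() for job in pending]
```

The key point is that `acquire` for the next sequence is never blocked by `remote_reconstruct` of the previous one.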
A local satellite PC equipped with a GPU can be used as the Tyger compute node instead of cloud infrastructure. The remote PC must have Tyger installed and configured (see Installation).
Once the remote PC is set up, connect from the scanner PC via SSH:
```
tyger login ssh://user@<REMOTE_IP>:22/opt/tyger/api.sock
```

For example, if the remote PC has IP 192.168.10.2:

```
tyger login ssh://[email protected]:22/opt/tyger/api.sock
```

Verify the connection:

```
tyger run list
```

Empty brackets `[]` indicate a successful connection with no prior runs. Once connected, the workflow within MaRGE is identical to the TEP workflow: select the pipeline from the Tyger tab and execute the sequence normally.
SNRAware open-source denoising uses the large model from the set of pre-trained models publicly released by Microsoft in their SNRAware GitHub repository. This model was trained exclusively on high-field MRI data using SNR-unit normalization, which makes it directly applicable to low-field acquisitions.
The Docker image is set automatically when choosing `Local` or `TEP` for the SNRAware version in the configuration menu. The `Local` option runs the large model, while the `TEP` option runs the full Microsoft SNRAware model. If `Local` is selected, the Docker image encapsulating the SNRAware repository together with a MaRGE-compatible interface wrapper is:
ghcr.io/mrilab-i3m/snraware_local:v1
The first run of the model takes longer, because the image is downloaded first if it is not already present on the local PC.
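SNR-unit normalization, on which the SNRAware models rely, simply rescales the data so that the noise standard deviation equals one. A stdlib-only illustration (function and variable names are assumptions, not SNRAware's API):

```python
import statistics

def to_snr_units(samples, noise_samples):
    """Rescale samples so that the noise standard deviation becomes 1."""
    sigma = statistics.pstdev(noise_samples)
    return [s / sigma for s in samples]

# With noise of standard deviation 2, an amplitude of 4 maps to 2 SNR units:
scaled = to_snr_units([4.0, -4.0], [2.0, -2.0, 2.0, -2.0])
```

Because the model sees data in these units, it generalizes across field strengths and noise regimes.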
When using TEP, the SNRAware denoising model is provided by Microsoft Research and executed on the TEP cloud infrastructure. The workflow is equivalent to the open-source version described above, with the difference that the model runs on the remote Azure GPU rather than on a local node.
In MaRGE, set the SNRAware version to TEP in the Tyger tab. Authenticated access to the TEP server is required.
Distortion correction through conjugate-phase (CP) reconstruction requires prior knowledge of the B₀ magnetic field distribution of the scanner. This field map is obtained using the SPDS (Single-Point Double-Shot) sequence, which is integrated as a dedicated sequence within MaRGE. The Docker image must be set in the Tyger configuration tab. Two options are currently available:
ghcr.io/teresaguallartnaval/dist_corr_tyger:v1
ghcr.io/mrilab-i3m/rare-recon:v1
Workflow:
- Acquire the B₀ field map using the SPDS sequence in MaRGE. Following acquisition, MaRGE automatically fits the field map with a polynomial model and stores the result in a text file.
- Enable distortion correction in the MaRGE Tyger tab. Select the stored B₀ field map file (`.txt`) in the corresponding field; these files are saved under MaRGE/marge/b0_maps/fits.
- Run the RARE sequence. MaRGE will include the field map as an additional input in the YAML configuration and stream it to the compute node alongside the MRD dataset.
- The CP reconstruction is executed remotely and the corrected image is returned to MaRGE.
The B₀ field map can be reused across multiple acquisitions as long as scanner conditions remain stable, without needing to repeat the SPDS mapping.
Distortion correction can be combined with denoising by enabling both options simultaneously in the Tyger tab. In this case, denoising is always applied first, as it relies on normalization of the input data to SNR units, a step that cannot be accurately performed after the distortion correction transform has modified the noise background.
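The core idea of conjugate-phase correction is to demodulate, at each location, the extra phase accrued due to the known B₀ offset. A one-voxel stdlib sketch (illustrative only; the real pipeline operates on full multidimensional k-space data):

```python
import cmath

def cp_demodulate(signal, times, off_resonance_hz):
    """Remove the phase accrued by a known off-resonance (in Hz) at each sample time."""
    return [s * cmath.exp(-2j * cmath.pi * off_resonance_hz * t)
            for s, t in zip(signal, times)]

# A voxel 50 Hz off resonance accrues phase exp(+2*pi*i*50*t) over time:
times = [k * 1e-3 for k in range(8)]
signal = [cmath.exp(2j * cmath.pi * 50 * t) for t in times]
corrected = cp_demodulate(signal, times, 50.0)  # all samples return to ~1+0j
```

In the full CP reconstruction this demodulation is applied per spatial location using the polynomial B₀ map, which is why the field map must be supplied as an input.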
This section covers the installation of all required software components for running Tyger reconstructions. Two scenarios are addressed: local execution (reconstruction performed on the same PC as the scanner) and remote execution (reconstruction offloaded to a separate satellite PC connected via Ethernet). For the latest and most up-to-date Tyger installation instructions, refer to the official Tyger documentation.
Download the latest release from:
https://github.com/microsoft/tyger/releases
Download the tyger_linux_x86_64.tar.gz file. In a terminal, navigate to the downloads folder and run:
```
tar -xvzf tyger_linux_x86_64.tar.gz
sudo cp tyger /usr/local/bin
```

Verify the installation:

```
tyger --version
```

If NVIDIA drivers are not yet installed:

```
sudo apt install nvidia-cuda-toolkit
sudo apt-get update
sudo apt-get install -y nvidia-driver-570-open
sudo reboot
```

Note: A reboot is mandatory to activate the drivers. The commands above install the `570-open` driver, which has been tested with an NVIDIA GeForce RTX 5060 Ti. For a different GPU, first list the available drivers with:

```
sudo ubuntu-drivers list --gpgpu
```

Then install automatically (letting the system choose the best driver):

```
sudo ubuntu-drivers install
```

Or install a specific version (e.g., 570):

```
sudo ubuntu-drivers install nvidia:570
```
```
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-cache policy docker-ce
sudo apt install docker-ce
sudo systemctl status docker
```

The last command should show Docker with status running.

Allow running Docker without sudo:

```
sudo usermod -aG docker $USER
newgrp docker
groups
```

Verify Docker works correctly:

```
docker run --rm hello-world
```

1. Add the NVIDIA container repository:
```
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /etc/apt/keyrings/nvidia-container-toolkit.gpg
sudo chmod a+r /etc/apt/keyrings/nvidia-container-toolkit.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/etc/apt/keyrings/nvidia-container-toolkit.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list > /dev/null
```

2. Install the NVIDIA container toolkit:

```
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```

3. Restart Docker:

```
sudo systemctl restart docker
```

4. Additional configuration (required if the GPU test fails):

```
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Test GPU access in Docker:

```
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```

Pull the Docker images used by the MaRGE pipelines:

```
docker pull ghcr.io/mrilab-i3m/snraware_local:v1
docker pull ghcr.io/mrilab-i3m/rare-recon:v1
```

Create the Tyger working directory:
```
sudo mkdir -p /opt/tyger
sudo chown -R "$(id -u):$(id -g)" /opt/tyger
```

Important: The `chown` step is required to avoid permission conflicts in the following steps, which are incompatible with `sudo`.

Create the configuration file:

```
tyger config create -f /opt/tyger/config.yml
```

In the interactive prompt, select Docker as the deployment environment (first option) and press Enter to accept the defaults for the remaining options.
Install the Tyger API:

```
cd /opt/tyger
tyger api install -f config.yml
```

Verify the installation and log in:

```
export TYGER_SOCKET_PATH=/opt/tyger/api.sock
tyger login --local
tyger run list
```

Empty brackets `[]` confirm a successful setup. For local-only use (reconstruction on the scanner PC itself), configuration ends here.
Install and enable the SSH server:

```
sudo apt update
sudo apt install openssh-server
sudo systemctl enable --now ssh
sudo systemctl status ssh
```

Identify the available network interfaces:

```
ip a
```

Select the Ethernet interface to use (e.g., enp4s0) and assign a static IP within the same subnet as the scanner PC. For example:

```
sudo ip addr flush dev enp4s0
sudo ip addr add 192.168.10.2/24 dev enp4s0
sudo ip link set enp4s0 up
```

Verify:

```
ip a show enp4s0
```

IP addresses and subnet masks can also be set manually through the Ubuntu network configuration GUI.
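The two static addresses must fall in the same subnet for the direct Ethernet link to work. Python's stdlib `ipaddress` module can check this quickly (a convenience helper, not part of MaRGE):

```python
import ipaddress

def same_subnet(ip_a, ip_b, prefix=24):
    """True if both addresses fall inside the same /prefix network."""
    net = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net

# 192.168.10.1 (scanner) and 192.168.10.2 (remote) share 192.168.10.0/24:
ok = same_subnet("192.168.10.1", "192.168.10.2")
```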
Identify the Ethernet interface on the scanner PC and assign a static IP in the same subnet as the remote PC:

```
sudo ip addr flush dev <INTERFACE>
sudo ip addr add 192.168.10.1/24 dev <INTERFACE>
sudo ip link set <INTERFACE> up
```

From the scanner PC:

```
ping -c 4 192.168.10.2
```

From the remote PC:

```
ping -c 4 192.168.10.1
```

Passwordless SSH is required for Tyger to connect to the remote node without user intervention. On the scanner PC, generate an SSH key pair and copy the public key to the remote PC:

```
ssh-keygen -t rsa -b 4096
ssh-copy-id [email protected]
```

Press Enter for all prompts (no passphrase). Verify the connection from the scanner PC:

```
tyger login ssh://[email protected]:22/opt/tyger/api.sock
```

Verify:

```
tyger run list
```

The scanner PC is now connected to the remote Tyger server and ready to stream reconstruction jobs.