Parameters

Below are short descriptions of the parameters you can set.

When changing parameters, make sure that you press Save, or they will not take effect! While some changes will work immediately after saving, you may need to restart tracking for others (press the Start button on the camera tab to stop, then press Start again to start. You do not need to recalibrate the playspace after doing this).

Always change one parameter at a time, so that if ATT stops working correctly at any point, you know which parameter is to blame!

Camera parameters

Here are all the parameters for opening the camera and its settings.

Ip or ID of camera:

If you have a webcam or OBS, this value will be a number: usually 0, 1, or higher if you have more cameras connected. The best way to figure out the correct index is to try them: type in 0, press Save, go back to the Camera tab, check Preview camera and press Start/Stop camera. If the correct camera shows up, great, you can go to the next step! If not, repeat the process with 1, then 2, and so on until you find it.

If you use IP Webcam, you should enter your IP address, the same one you used in your browser, but ending with /video. The field should look something like this: http://192.168.1.100:8080/video, but with different numbers.
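
If you are unsure which index or URL belongs to which camera, you can also check outside of ATT with a few lines of OpenCV. This is only a sketch (not part of ATT), and the IP address is just an example:

```python
# Quick check of which camera index or URL works (illustrative only).
import cv2

source = 0  # try 0, then 1, 2, ... for USB/OBS cameras
# source = "http://192.168.1.100:8080/video"  # IP Webcam style URL (example address)

cap = cv2.VideoCapture(source)
ok, frame = cap.read()
if ok:
    print("Camera opened, resolution:", frame.shape[1], "x", frame.shape[0])
else:
    print("No frame from this source, try the next index.")
cap.release()
```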

Camera API preference:

The API to use when opening the camera. For USB cameras, you may want to set it to 700 (DirectShow), as it allows you to use the Open camera settings parameter and set exposure manually. For IP Webcam, leave it at 0.
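
The numbers are OpenCV VideoCapture backend IDs; 700 is the DirectShow backend and 0 lets OpenCV pick any backend. A minimal illustration, assuming the Python OpenCV bindings:

```python
import cv2

print(cv2.CAP_DSHOW)  # 700 (DirectShow)
print(cv2.CAP_ANY)    # 0 (let OpenCV pick a backend)

# Opening a USB camera explicitly through DirectShow:
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap.release()
```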

Rotate camera clockwise/counterclockwise:

This will rotate the camera view 90° in the chosen direction, which lets you stand closer to the camera. That is useful if you don't have much space or you have a low-resolution camera (640x480). If you use a PS Eye, you should use this. You can also check both options to rotate the camera 180°.
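
In OpenCV terms, the two options are roughly equivalent to the following rotations of each frame (just an illustration of what the setting does, not ATT's code):

```python
import cv2

frame = cv2.imread("example.png")  # any test image stands in for a camera frame

cw = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)          # clockwise checked
ccw = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)  # counterclockwise checked
both = cv2.rotate(frame, cv2.ROTATE_180)                 # both checked
```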

Camera width/height:

You can usually leave this at 0 and the program will automatically determine the correct width and height, but you may want to set them manually to ensure the camera opens at the correct resolution.

Camera FPS:

The FPS of your camera. If you want to use a 60fps camera, set this to 60.
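
Both the resolution and the FPS map to standard OpenCV capture properties. A small sketch (not ATT code) for requesting them and checking what the camera actually delivered:

```python
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 60)

# The camera may silently fall back to another mode, so read the values back:
print("width: ", cap.get(cv2.CAP_PROP_FRAME_WIDTH))
print("height:", cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print("fps:   ", cap.get(cv2.CAP_PROP_FPS))
cap.release()
```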

Open camera settings:

When you open your camera, having this checked will attempt to open the window for setting the camera's hardware parameters. This can be used to set exposure on most cameras: set it to -8, or to -7 if -8 is too dark.

This will only work if Camera API preference is set to 700!
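
For reference, the DirectShow settings dialog can be popped up through a standard OpenCV property; this sketch shows the mechanism (assuming the usual Python bindings), which is also why the DirectShow backend is required:

```python
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)  # DirectShow backend (700) required
cap.set(cv2.CAP_PROP_SETTINGS, 1)         # pops up the driver's settings window
# ...adjust exposure to -8 (or -7) in the dialog...
cap.release()
```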

Enable last 3 options:

Try to set the autoexposure, exposure and gain of the camera using the next 3 options. It is suggested that you try the Open camera settings parameter first, but this way may work in some cases where that doesn't (a sketch of the corresponding OpenCV calls follows the list below).

  • Camera autoexposure: enable or disable autoexposure. In most cases, you want this at 0 to disable it.
  • Camera exposure: set the exposure, if supported by the camera. You want this at -8 (or -7 if that is too dark).
  • Camera gain: set the gain, usually a value from 0 to 255. You probably want something high, so start with 255.
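
The three options correspond to standard OpenCV capture properties; a sketch with the recommended starting values (how a given camera driver interprets them can vary):

```python
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0)  # 0 = try to disable autoexposure
cap.set(cv2.CAP_PROP_EXPOSURE, -8)      # -8, or -7 if the image is too dark
cap.set(cv2.CAP_PROP_GAIN, 255)         # start high, lower it if too noisy
cap.release()
```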

Tracker parameters

Parameters relevant to your physical trackers and their detection.

Number of trackers:

The number of trackers you wish to use. For full body, you have to use 3. You cannot use full-body tracking in VRChat with just 2! If you want to use owotrack for a hip tracker, you still want to use 3 here; just check Ignore tracker 0 to ignore the apriltag hip tracker.

Size of markers in cm:

Measure the size of your printed markers in cm, and input the value here. Measure the white square, like this:

(image: marker_measure — measuring the white square of a printed marker)

Quad decimate:

This is the quality setting. The value can be 1, 1.5, 2, 3 or 4. The higher this value, the faster the tracking will be, but the detection range will decrease. The right value depends on the camera you use. In general, you will probably have to use 1 at 480p, 2 at 720p and 3 at 1080p. You can fine-tune this parameter later based on the performance you are getting: if you get high FPS, you can decrease it; if your trackers don't get detected well, increase it.
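
As a rule of thumb, the starting value scales with the camera's vertical resolution. A purely illustrative helper (not part of ATT) that encodes the suggestions above:

```python
def suggested_quad_decimate(frame_height: int) -> float:
    """Starting point only; fine-tune based on FPS and detection quality."""
    if frame_height <= 480:
        return 1
    if frame_height <= 720:
        return 2
    return 3

print(suggested_quad_decimate(480), suggested_quad_decimate(720), suggested_quad_decimate(1080))  # 1 2 3
```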

Search window:

To increase performance, the algorithm only searches for trackers in a window around the position where they were last seen. This parameter sets the size of that window. Lowering the value makes the windows smaller, which makes the program run faster, but increases the chance of moving a tracker outside its window, which will cause it to no longer be tracked.

The window is visualized with blue circles. The tracker must be inside at least one window or it will not be tracked.
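
Conceptually, the windowed search amounts to cropping the frame around the last known position before running detection. A simplified sketch (the function, names and square window here are illustrative; ATT's actual window shape and sizing may differ):

```python
import numpy as np

def search_window(frame: np.ndarray, last_xy, window_size: int) -> np.ndarray:
    """Crop a square region around the last detected position."""
    h, w = frame.shape[:2]
    x, y = last_xy
    half = window_size // 2
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    return frame[y0:y1, x0:x1]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # dummy frame
crop = search_window(frame, last_xy=(640, 360), window_size=300)
print(crop.shape)  # (300, 300, 3) -- only this region is scanned for markers
```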

Marker library:

The library of markers that you wish to use. We suggest you use the original ApriltagStandard library unless you know what you are doing.

The ApriltagCircle library works worse than the standard library in almost every way, but it may be useful for unusual tracker configurations.

The Aruco library is less accurate and less robust than Apriltag, but runs much faster and, with high-resolution cameras, allows for smaller markers.

If you use any of the non-standard libraries, you also have to make your trackers out of the corresponding markers. Marker ids 0-44 still belong to tracker 0, ids 45-89 to tracker 1, and so on.
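
The grouping of marker ids into tracker ids can be written as a simple integer division (illustrative only; 45 markers per tracker matches the ranges above):

```python
MARKERS_PER_TRACKER = 45  # matches the 0-44 / 45-89 ranges; see "markersPerTracker" in config.yaml

def tracker_id(marker_id: int) -> int:
    return marker_id // MARKERS_PER_TRACKER

print(tracker_id(0), tracker_id(44))   # 0 0 -> tracker 0
print(tracker_id(45), tracker_id(89))  # 1 1 -> tracker 1
```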

tagCustom29h10: since v0.7.1, this library has been added to work with dodecahedron-shaped trackers. These trackers are still experimental and will work worse if your calibrations aren't perfect, so only attempt them if you are very familiar with ATT! The files are pinned in the dev-talk channel on the Discord, and you have to set "markersPerTracker" to 11 in config.yaml.

Use centers of trackers:

By default, trackers in SteamVR will follow the main marker on each physical tracker. This option moves them to the center of the tracker instead, which can reduce some jitter, so it is recommended to set it to true.

Ignore tracker 0:

This will cause tracker 0 to not be tracked. Use this if you want to replace the hip tracker with a Vive puck/owotrack. Keep the number of trackers at 3.

Smoothing parameters

This section covers all parameters related to smoothing the 3D position of the trackers. Don't forget to check the Refining parameters page for more in-depth descriptions and visualizations!

Smoothing time window:

Most of the smoothing is done with linear interpolation using values in a time window. The bigger the window, the more past values the algorithm uses and the smoother the movement will be, at the cost of increased delay. 0.5 seconds works fine, but if that isn't responsive enough, you can lower it. 0.3 can work fine, or even 0.2 in some setups, but setting it too low may cause detection to break.
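
As a rough sketch of the idea (the function and names here are illustrative; the exact method in ATT may differ): keep the detections from the last window of time, fit a line through them, and evaluate it at the current moment.

```python
import numpy as np

def smoothed_position(samples, now: float, window: float) -> np.ndarray:
    """samples: list of (timestamp, position) pairs, position = np.array([x, y, z])."""
    recent = [(t, p) for t, p in samples if now - t <= window]
    if len(recent) < 2:
        return samples[-1][1]
    times = np.array([t for t, _ in recent])
    positions = np.array([p for _, p in recent])
    slope, intercept = np.polyfit(times, positions, deg=1)  # per-axis line fit
    return slope * now + intercept
```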

Additional smoothing:

Additional smoothing is done on top of the linear interpolation, using a leaky integrator with the formula: current_position = previous_position * value + tracked_position * (1 - value). Using this can make movement appear much smoother at almost no cost in delay.

What this means is that the parameter is between 0 and 1: 0 means using only the tracking data without smoothing, and 1 means using only the previous data. Decreasing this parameter will make the trackers respond faster, but also increase shaking. Experiment with different values to find the sweet spot; 0.7 is usually a good starting point.
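
In code, the formula above is a single line; for example with value = 0.7:

```python
def additional_smoothing(previous_position, tracked_position, value):
    # value = 0 -> use only the raw tracked position, value = 1 -> never move
    return previous_position * value + tracked_position * (1 - value)

# One axis as an example: old position 1.00, new detection at 1.10, value 0.7
print(additional_smoothing(1.00, 1.10, 0.7))  # 1.03
```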

Depth smoothing:

When estimating the pose of a tracker, the distance from the camera (depth) is the least accurate part of the estimate. This parameter attempts to smooth the depth more, in order to reduce shaking caused by wrong depth estimates when using multiple cameras.

This is not recommended with only one camera, but you can try values around 0.1 or 0.2 with multiple cameras.
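
One way to picture the idea (a rough sketch under the assumption that depth means the distance from the camera; not ATT's exact implementation): split the tracked position into a direction from the camera and a distance, and blend only the distance with the previous value.

```python
import numpy as np

def smooth_depth(camera_pos, previous_pos, tracked_pos, depth_smoothing: float):
    direction = tracked_pos - camera_pos
    distance = np.linalg.norm(direction)
    direction = direction / distance
    previous_distance = np.linalg.norm(previous_pos - camera_pos)
    # leaky-integrator style blend applied to the depth component only
    blended = previous_distance * depth_smoothing + distance * (1 - depth_smoothing)
    return camera_pos + direction * blended
```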

Camera latency:

The latency of the camera you use. This will always be a very small value, in the range of 0-0.1 seconds. Setting this properly will reduce the delay of the trackers, but can introduce some shakiness.

You usually only need this with cameras connected over WiFi, but small values such as 0.02 may help reduce delay a bit even with USB cameras.