AppUsage - haimeh/finFindR GitHub Wiki

Getting Started with the App!

Launch the app with the icon the installer placed on your desktop. You will be presented with the home screen.
On the left-hand side you will see the control panel. Depending on the process needed, there are 5 different behaviors to choose from:

  • Rdata: Loads a previously saved catalogue
  • Image: Begins processing a new catalogue from a directory of cropped images
  • Field: Preprocesses raw field images by cropping
  • Label: Applies labels to traced images via csv
  • Rename: Assigns new image names via csv
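
For the Label and Rename inputs, the CSV pairs each image file with the value to apply. The exact column layout finFindR expects is not documented here; the fragment below is purely a hypothetical illustration of the general idea (made-up headers and values):

```csv
Image,ID
2019-06-01_0042.jpg,Tt042
2019-06-01_0043.jpg,Tt107
```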

There are also two features important to saving the session in Rdata.

As you work with and view individual query images in either the Matches or Clusters tabs, you may edit the data associated with a given image:

  • Modifying Data: How to assign a label or adjust the trace of the trailing edge

Processing your first catalogue

To begin, provide a Query Directory containing the images you wish to perform matching with. Select the Image input type and click the Trace Fins button. finFindR will begin extracting distinguishing features from the trailing edge of the dorsal fin to use for matching. Once this is complete, I recommend that you Save Query Rdata.
This process only needs to be done once for each catalogue, after which you would instead load the previously processed image directory using Rdata.

Once the fin tracing is complete (or you load a saved catalogue), you can use the Clusters Tab (see below) to see an estimate of which images share similar dorsal fin features.
Consider reviewing some of the traces (see Modifying Data) to make sure that they all run along the TRAILING edge of the fin (finFindR works best with close crops of the dorsal fin).

To get more precise match information, you will need to provide a Reference Directory of previously processed images, i.e., a directory of images with a finFindR.Rdata file saved. Once you have Rdata for multiple catalogues (meaning you have already used Image on each and saved), start by selecting Rdata as the input type. Provide one directory as the Query Directory and click Load Rdata, then provide another directory as the Reference Directory and click Append Rdata. finFindR will begin the comparison (see Matches Tab below).
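
Conceptually, the neural network reduces each traced trailing edge to a numeric feature vector, and matching then amounts to comparing those vectors. A minimal sketch of that idea in Python (the vectors, file names, and choice of Euclidean distance here are illustrative assumptions, not finFindR's actual internals):

```python
import math

# Hypothetical embeddings: one feature vector per traced fin image.
query = {"Q_001.jpg": [0.1, 0.9, 0.3]}
reference = {
    "R_dolphinA.jpg": [0.1, 0.8, 0.3],   # close to the query
    "R_dolphinB.jpg": [0.9, 0.1, 0.7],   # far from the query
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# For each query image, sort reference images by distance (smaller = closer).
matches = {}
for q_name, q_vec in query.items():
    ranked = sorted(reference, key=lambda r: distance(q_vec, reference[r]))
    matches[q_name] = ranked

print(matches["Q_001.jpg"][0])  # best candidate: R_dolphinA.jpg
```

The per-query sorted lists correspond to the rows of the Matches table, with the closest candidates in the leftmost columns.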

Comparing Fins

Matches Tab

When you have data loaded into both the Query and Reference boxes, the app will compare each individual in the Query Directory against all individuals in the Reference Directory. A progress bar will appear in the lower right corner.
When this process is complete, and after a short pause, a table is shown below the two windows where fins can be displayed.
Click a cell in the table to display the corresponding pair of images.
Each row represents an individual image from the Query directory, shown on the left.
Each column represents a potential match from the Reference directory, shown on the right. Columns are ordered by proximity of match from left to right.

The Rank selection lets you choose how many potential matches you want to see in the table. This will also be the number of columns saved in the CSV generated by the Download button at the bottom of the table.
The table offers 3 ways to view the data:

  • DistanceTab shows the distance under the metric generated by the neural network for grouping individuals. Smaller numbers indicate better (closer) matches.
  • IDTab shows the ID associated with a given cell. Selecting the 1 Per ID checkbox limits all tabs in the table to only show the best match for each unique ID.
  • NameTab shows the image name for each cell.

The user selection is kept synchronized between all the Table tabs.
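
The 1 Per ID filter can be pictured as collapsing the ranked candidates so that only the closest image for each unique ID survives before the top Rank columns are taken. A small illustration in Python (image names, IDs, and distances are made up for the example):

```python
# Ranked candidates for one query image: (image name, ID, distance),
# already sorted by distance as in the DistanceTab.
ranked = [
    ("img_17.jpg", "Tt042", 0.8),
    ("img_03.jpg", "Tt042", 1.1),  # same ID, worse match -> dropped by 1 Per ID
    ("img_55.jpg", "Tt107", 1.4),
    ("img_21.jpg", "Tt042", 2.0),
]

def one_per_id(candidates):
    """Keep only the first (closest) candidate for each unique ID."""
    seen, kept = set(), []
    for name, fin_id, dist in candidates:
        if fin_id not in seen:
            seen.add(fin_id)
            kept.append((name, fin_id, dist))
    return kept

rank = 2  # the Rank selection: number of columns shown / saved to CSV
top = one_per_id(ranked)[:rank]
print([fin_id for _, fin_id, _ in top])  # ['Tt042', 'Tt107']
```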

Clusters Tab

This can be a convenient way of seeing how the neural network groups individuals without requiring a completed reference catalogue. Once images are loaded from the Query directory, the Reference directory, or both, a table is populated, indexed by an estimate of how the neural network clusters individuals, along with a visualization of the neural network output.
Each row represents an individual. Click a row to render the image it represents.
You can select multiple individuals by clicking up to 4 rows to render fins in 4 windows. Between selections, you can click the lock checkbox in the upper right corner of a window to keep it open for comparison.
You can close a selection either by clicking the row again, or by using the close button in the top right corner of a fin window.
Use the search bar to subset by contents of the row names.
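
The cluster estimate can be thought of as grouping feature vectors whose pairwise distances are small. A toy sketch in Python (the vectors, threshold, and greedy grouping rule are illustrative assumptions, not finFindR's actual clustering algorithm):

```python
import math

# Hypothetical feature vectors for five traced fins.
fins = {
    "img_a.jpg": [0.0, 0.0],
    "img_b.jpg": [0.1, 0.1],   # near img_a -> same cluster
    "img_c.jpg": [5.0, 5.0],
    "img_d.jpg": [5.1, 4.9],   # near img_c -> same cluster
    "img_e.jpg": [9.0, 0.0],   # isolated -> its own cluster
}

def cluster(vectors, threshold):
    """Greedy grouping: join an image to the first cluster whose
    representative lies within `threshold`; otherwise start a new one."""
    clusters = []  # list of (representative vector, member names)
    for name, vec in vectors.items():
        for rep, members in clusters:
            if math.dist(vec, rep) < threshold:
                members.append(name)
                break
        else:
            clusters.append((vec, [name]))
    return [members for _, members in clusters]

print(cluster(fins, threshold=1.0))
# [['img_a.jpg', 'img_b.jpg'], ['img_c.jpg', 'img_d.jpg'], ['img_e.jpg']]
```

Each resulting group corresponds roughly to one row-index neighborhood in the Clusters table: images the network considers likely to be the same individual.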
