MagNIMBUS magnetometer data processing - ugcs/GeoHammer GitHub Wiki


Processing magnetic survey data might seem overwhelming at first, but with tools like UgCS GeoHammer it becomes much simpler, even for beginners. In this article, we’ll walk you through handling airborne magnetic data step by step, from importing your data through processing it to generating detailed maps.

Whether you’re new to geophysical surveys or looking for a more efficient way to manage your data, UgCS GeoHammer provides an intuitive platform to streamline the entire process. By following this guide, you’ll learn how to quickly transform raw magnetic data into meaningful results with ease.

Data used in this tutorial can be downloaded here.
Data was gathered using a MagNIMBUS atomic total-field magnetometer over SPH Engineering's test range with buried pipes and barrels. A comprehensive analysis of this survey, "Ground versus UAV-based single-sensor magnetometer comparison", can be read here.

| Flight parameters | Hardware |
| --- | --- |
| Flight altitude: 1.5 m from altimeter | MagNIMBUS |
| Flight altitude: 0.5 m from the sensor | DJI M350 RTK drone |
| Flight velocity: 4 m/s | SkyHub onboard computer with TTF system and RTK GPS |


  1. Use the “Open files” toolbar button or drag and drop the required file for processing.


  2. Use the “Select Area” button to choose the desired survey area and click “Apply Crop”.


  3. Choose “TMI” in the data processing zone for further processing.


  4. Before applying the filters, we can grid the raw data to check that it was recorded correctly and that no lines are missing.
  • First, we need to enter a cell size for our grid. Choosing a value that is too small will take a long time, because very many small cells must be generated. Depending on the survey size and type, it is advisable to start with a cell size of about 1/4 of the line spacing and decrease it gradually. A smaller cell size gives higher resolution but takes more computing power. For our dataset we chose a cell size of 0.1.
  • Second, we need to enter the blanking distance. The blanking distance should be large enough to cover the areas between flight paths and fill the no-data zones. As our flights had 1.0 m spacing between lines, we can enter 1 for complete coverage of the grid and click "Apply".


Gridding the raw data now lets us estimate which filter values we should apply to the dataset, such as the lag correction.
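The gridding-with-blanking idea above can be sketched in code. This is a minimal, hedged example using plain nearest-neighbour interpolation with NumPy; GeoHammer's actual gridding algorithm is not documented here, and all names are illustrative:

```python
import numpy as np

def grid_with_blanking(x, y, values, cell_size=0.1, blanking=1.0):
    """Interpolate scattered readings onto a regular grid,
    blanking cells farther than `blanking` from any sample.
    Brute-force nearest neighbour: fine for a sketch, not for
    large surveys."""
    xi = np.arange(x.min(), x.max() + cell_size, cell_size)
    yi = np.arange(y.min(), y.max() + cell_size, cell_size)
    gx, gy = np.meshgrid(xi, yi)
    # Squared distance from every grid node to every sample point
    d2 = (gx[..., None] - x) ** 2 + (gy[..., None] - y) ** 2
    grid = values[d2.argmin(axis=-1)]
    # Apply the blanking distance: NaN in the no-data zones
    grid = np.where(np.sqrt(d2.min(axis=-1)) > blanking, np.nan, grid)
    return gx, gy, grid
```

With 1.0 m line spacing, a blanking distance of 1 leaves no gaps between lines, while a much smaller value would blank the mid-line cells.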

  5. The first processing step is to apply a low-pass filter. For our dataset we apply a cut-off wavelength of 50 fiducials and click “Apply".


  • When the low-pass filter is applied, you can inspect the filtered data for each profile using the tools from the toolbar.


  • The dashed line shows the raw, unedited data and the solid line shows the low-pass filtered data. Note that the picture has a large vertical scale exaggeration.
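The effect of the low-pass step can be illustrated with a simple moving average along a profile. This is a rough stand-in: GeoHammer's actual filter and its cut-off definition may differ, and the window length in samples is only loosely analogous to fiducials:

```python
import numpy as np

def low_pass(profile, window=51):
    """Smooth a magnetometer profile with a moving average.
    `window` (odd, in samples) plays the role of the cut-off
    length; edges are handled by repeating the end values."""
    half = window // 2
    padded = np.pad(profile, half, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")
```

A long, slow anomaly passes through nearly unchanged, while short-wavelength noise is strongly attenuated, which is why the filtered (solid) line tracks the dashed raw line without its jitter.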


  6. The next step is to perform the GNSS time-lag correction. After trying different values, we concluded that a value of -14 worked best.



After applying the GNSS time-lag correction, we can compare our grids with and without the lag correction to see its effect clearly.

No time-lag correction vs. applied time-lag correction (comparison images)
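Conceptually, the time-lag correction shifts the magnetometer readings by a fixed number of samples relative to the GPS positions. A minimal sketch, illustrative only; the sign convention and GeoHammer's edge handling and sub-sample interpolation are assumptions:

```python
import numpy as np

def apply_time_lag(field, lag):
    """Shift readings by `lag` samples relative to their recorded
    positions; a negative lag pairs each position with an earlier
    reading. Edge samples are clamped rather than dropped."""
    idx = np.clip(np.arange(len(field)) + lag, 0, len(field) - 1)
    return field[idx]
```

Because alternating flight lines run in opposite directions, an uncorrected lag shifts anomalies in opposite directions on adjacent lines, producing the herringbone pattern that disappears in the corrected grid.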


  7. After applying the time-lag correction, the next step is to apply the running median filter. This filter gives us the amplitude of local magnetic anomalies relative to the median value. We enter a window size of 5500 (covering the length of the area).
  • Note that a smaller window will also make noise more prominent.
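The running-median step subtracts a windowed median from the total field, leaving the local anomaly amplitude. A small hedged sketch in brute-force NumPy; the 5500-fiducial window from the text corresponds to `window` here, and GeoHammer's exact windowing may differ:

```python
import numpy as np

def residual_anomaly(field, window=5500):
    """Subtract a running median from the total field, leaving the
    amplitude of local anomalies relative to the median value."""
    half = window // 2
    n = len(field)
    residual = np.empty(n)
    for i in range(n):
        # Window clipped at the profile ends
        lo, hi = max(0, i - half), min(n, i + half + 1)
        residual[i] = field[i] - np.median(field[lo:hi])
    return residual
```

The median is robust to short spikes, so a narrow anomaly survives in the residual while the regional background is removed; a window much smaller than the anomaly wavelength would start removing the anomaly itself and emphasise noise.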


  8. Now that we have created residual anomaly data, we need to grid it. Choose the TMI_anomaly field in the slider menu and apply gridding with the previous settings. Since the data now contains only residual anomalies, we can change the Range slider option in the gridding settings to show high, low, or all available values.


  9. If everything is done correctly, you should get the final residual anomaly map. For better visual presentation, you can hide the flight path by toggling the GPS track layer.


  10. The next step is to pick anomalies that are potential targets for further investigation. First, find an anomaly with a signal strong enough to be worth checking out, double left-click on its center to set a point, and choose "Create mark" in the toolbar. Do this for all identifiable anomalies. By using the slider option for gridding, we can get an even better residual anomaly map for data visualization. Once all the anomalies are picked, each one is shown as a mark on the map.


  • Another way to pick anomalies is to use the center graph and look through each line individually, applying the same principle.


  11. After you have picked all the necessary anomalies, you can export them as a .kml file if you need to load them into GIS software or view them in Google Maps or Google Earth, or you can export the map as a georeferenced GeoTIFF image.

Note that this option is only available in the licensed software!
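For reference, the structure of such a KML export is simple. A minimal hand-rolled writer; the `(name, lat, lon)` mark format here is illustrative, not GeoHammer's actual export schema:

```python
def marks_to_kml(marks):
    """Serialize picked anomaly marks as a minimal KML string.
    `marks` is a list of (name, lat, lon) tuples. Note that KML
    coordinates are written in lon,lat,alt order."""
    placemarks = "\n".join(
        f"  <Placemark><name>{name}</name>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
        for name, lat, lon in marks)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
            f"{placemarks}\n</Document></kml>")
```

A file written this way opens directly in Google Earth and in most GIS packages.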


  12. As a last step, you can run “Quality control” to check whether the data acquisition was performed without large flight-path errors. Knowing the line spacing of each flight and the flight altitude (AGL), we can look for discrepancies between flight lines, using their error amplitude as the tolerance.


  • By examining just a smaller portion of the survey area, we can clearly see that the flight path stayed within the error margin almost the entire time. The red line shows the area where the tolerance was not met.
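The quality-control idea can be sketched as a spacing check between adjacent flight lines; the planned-spacing and tolerance values below are illustrative, and GeoHammer's actual QC also accounts for altitude:

```python
import numpy as np

def line_spacing_qc(line_positions, planned_spacing=1.0, tolerance=0.25):
    """Flag adjacent flight lines whose actual spacing deviates from
    the planned spacing by more than `tolerance` (all in metres).
    Returns (spacings, boolean mask of out-of-tolerance gaps)."""
    pos = np.sort(np.asarray(line_positions, dtype=float))
    spacing = np.diff(pos)
    return spacing, np.abs(spacing - planned_spacing) > tolerance
```

Gaps flagged `True` correspond to the red segments in the QC view, where the tolerance was not met.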

