RDM

Foreword

This document describes random and not-so-random musings involving RDM. I fracking despise doing route calculations, so I wanted automagically generated routes. I can understand fetishes, but like feet, route generation is one that confounds me; still, I respect folks' predilections and idiosyncrasies in pursuit of odd things, as long as no one gets hurt.

Findy Pokemon

Findy Pokemon is a mode I wrote for RDM to find the TTH (time till hidden) of spawnpoints with minimal interaction. You set it up with a geofence covering the area whose spawnpoints you want TTH for, then throw devices at it. The code queries the database for spawnpoints in your area that have a null TTH, builds a list, and sends devices to the coordinates; when it runs out of coords, it goes back to the start. To increase the chance of finding TTH, it requeries the database every 5 minutes, which drops out spawnpoints whose TTH has been found and keeps devices working over the start of the list instead of the whole list. This should maximize the chance of finding TTH as it iterates over points toward the front of the list.
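
A rough sketch of that loop, assuming hypothetical names (Spawnpoint, queryNullTth) rather than RDM's actual types:

```swift
import Foundation

// Hypothetical shape of a spawnpoint row with an unknown despawn time.
struct Spawnpoint {
    let id: UInt64
    let lat: Double
    let lon: Double
}

// Sketch of the findy working-list logic: requery every 5 minutes so
// points whose TTH has been found drop out, and devices keep cycling
// over the front of the fresh list rather than the stale whole list.
final class FindyList {
    private var points: [Spawnpoint] = []
    private var index = 0
    private var lastQuery = Date.distantPast
    private let requeryInterval: TimeInterval = 5 * 60

    // Stand-in for the real db query that returns spawnpoints in the
    // geofence with a null TTH.
    private let queryNullTth: () -> [Spawnpoint]

    init(queryNullTth: @escaping () -> [Spawnpoint]) {
        self.queryNullTth = queryNullTth
    }

    // Each device asks for its next coordinate.
    func next() -> Spawnpoint? {
        if Date().timeIntervalSince(lastQuery) > requeryInterval {
            points = queryNullTth()   // found-TTH points fall out here
            index = 0                 // restart at the front of the list
            lastQuery = Date()
        }
        guard !points.isEmpty else { return nil }
        defer { index = (index + 1) % points.count }  // wrap to the start
        return points[index]
    }
}
```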

You will have to initialize the geofence area prior to running findy. A good initialization would be 4 passes across an area. I typically point a pile of devices at an area and let it go overnight. In practice 2 passes will work, assuming that spawnpoints are clustered: when findy goes to a spot and finds new spawnpoints, it works them into the data list on the next requery. If you did a quick initialization, do not be surprised if the number of data points increases at first.

The following are some tuning points. You need to find how many seconds it takes for a device to get data from a point. I use Atlas, so adb shell into it and cat /data/local/tmp/atlas.log and it will tell you the time. My system is at 3 seconds per hop. Knowing that TTH is only propagated in the last 90 seconds of a spawn, one device on 30 points will find TTH in a rapid manner. So for starting an area, I would assign enough devices to give 50 points per device, if you are able to monitor and reassign devices for efficient use as the unknown-TTH count goes down. If you are running findy overnight, use a large area that won't run out of points to interrogate before you wake up.
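
The arithmetic behind those numbers, as a quick sketch with my values (the area size is just an example):

```swift
// Worked example of the device-count arithmetic above.
let secondsPerHop = 3.0          // from atlas.log on my setup
let tthWindow = 90.0             // TTH only shows in the last 90s of a spawn
let pointsPerWindow = tthWindow / secondsPerHop   // 30 points per device

// Starting allocation: roughly 50 points per device, trimmed as TTH is found.
let unknownTthPoints = 1500.0    // example area size
let startingDevices = (unknownTthPoints / 50.0).rounded(.up)
print(pointsPerWindow, startingDevices)  // 30.0 30.0
```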

The following are future considerations. I am not a micro-optimization person; I seek maximum benefit from minimum work.

  1. I could make the case for very rudimentary clustering. Take a point found by the db query, then make circles centered 65m away from it, the first at the 12 o'clock position, the second at 1:30, the third at 3:00, and so on around the dial. Then run the rest of the db points against those circles, see which circle covers the most points, and put that one into the list that findy utilizes (see the sketch after this list).
  2. Limit the working list based off how many devices and spawns one has in the area. For overnight, I would put my pile of devices onto one massive area. Findy would still query the db to get the full list, but only put 50 * device count into the working list.
  3. Configuration options for tuning to your particular situation. This doesn't seem to be an RDM paradigm, but I like to give folks choices to better suit their situation than my use case. But with the power to change values comes the responsibility to know the ramifications of such changes.
  4. Determine a way to integrate 60 minute timers. In practice, I have observed them getting found with 35+ min timers.
  5. Smarter determination of spawn times instead of just running over points until TTH is found. I could envision maintaining, for each spawn, linked data for the minutes of the hour. When a device visits a spot and does not see a monster there, it records in the linked list that the spot was visited in that minute. When the minute changes, it starts at the beginning of the list, checking each point that has not been visited in the current minute. For this to work, I would have to utilize some Swift caching code that can be serialized and/or put into the db to persist across starts/stops of RDM.
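
A minimal sketch of the rudimentary clustering from item 1, assuming a flat-earth distance approximation and a 70m coverage radius (the radius is my assumption, not something stated above):

```swift
import Foundation

struct Coord { let lat: Double; let lon: Double }

// Approximate meters between two coords; fine at these short ranges.
func distance(_ a: Coord, _ b: Coord) -> Double {
    let mPerDegLat = 111_320.0
    let mPerDegLon = mPerDegLat * cos(a.lat * .pi / 180)
    let dy = (a.lat - b.lat) * mPerDegLat
    let dx = (a.lon - b.lon) * mPerDegLon
    return (dx * dx + dy * dy).squareRoot()
}

// Candidate circle centers 65m from `p` at clock positions:
// noon = bearing 0 degrees, 1:30 = 45, 3:00 = 90, and so on.
func candidateCenters(around p: Coord, offsetMeters: Double = 65) -> [Coord] {
    (0..<8).map { i in
        let bearing = Double(i) * 45 * .pi / 180
        let dLat = offsetMeters * cos(bearing) / 111_320.0
        let dLon = offsetMeters * sin(bearing) / (111_320.0 * cos(p.lat * .pi / 180))
        return Coord(lat: p.lat + dLat, lon: p.lon + dLon)
    }
}

// Pick the candidate circle that covers the most db points.
func bestCircle(for p: Coord, among points: [Coord], radius: Double = 70) -> Coord {
    let centers = candidateCenters(around: p)
    return centers.max { a, b in
        points.filter { distance($0, a) <= radius }.count <
        points.filter { distance($0, b) <= radius }.count
    } ?? p
}
```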

Jumpy Pokemon

This mode was designed to yield 28+ minute timers if one has it set up properly and assigns enough devices. I do not spoof, so I have to drive to spawns, and therefore I require good timers. Circle modes are good for some, such as those that are device-poor (want more spawns per device) and don't mind 20 min timers.

I laid out my setup as follows. I laid out fences and fired up jumpy on them. This will tell you how many spawns you have for the area. Given that Atlas takes 3 seconds per hop for me, I can do 1200 an hour assuming they are perfectly spaced. Since this is not necessarily true, I assign one device per 1k spawnpoints. This will yield 28+ minute timers for 30 minute spawnpoints. 60 minute spawnpoints are not considered, as I am not aware that RDM monitors or determines them. Of course, do a good initialization for the area, as this mode visits spots at specific times and may not randomly find TTH for points in the geofence that are missing it.
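
The capacity math behind the one-device-per-1k rule, sketched with an example fence size:

```swift
// Rough capacity math for jumpy device allocation.
let secondsPerHop = 3.0                       // Atlas, from atlas.log
let hopsPerHour = 3600.0 / secondsPerHop      // 1200 perfectly spaced hops
// Spacing is never perfect, so plan for ~1000 spawnpoints per device.
let spawnpointsInFence = 3800.0               // example from a jumpy status line
let devicesNeeded = (spawnpointsInFence / 1000.0).rounded(.up)
print(hopsPerHour, devicesNeeded)             // 1200.0 4.0
```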

The following are future considerations. I am not a micro-optimization person; I seek maximum benefit from minimum work.

  1. Some basic form of clustering. I rely on the raw speed of Atlas and a simple list to cover more ground and collect more data than competitors.
  2. Add capacity to RDM for 60 minute spawns.
  3. Configuration options for tuning to your particular situation. This doesn't seem to be an RDM paradigm, but I like to give folks choices to better suit their situation than my use case. But with the power to change values comes the responsibility to know the ramifications of such changes.

V2

Both

  1. Add a field to the spawnpoint table to track more information, in this case spawn-time info, so that RDM can determine which spawns are 30 min and which are 60 min. This is not automatic, nor would it work well using findy to determine it; some would get done during init and the remainders would be found using a route (so that visits to a spawn happen at various times through the hour, whereas findy only visits during specific times). It simply sets a bit for each quarter of the hour, so if a mon was seen in all 4 quarter hours it is a 60 min spawn, and if seen in fewer it is a 30 min spawn (see the sketch after this list). The data size is larger than I need for my purposes, so others can use it in the future; I set it to a 32-bit unsigned int, but am open to changing that to 16-bit.
  2. Add a created field to spawnpoint. This would allow easier tracking and would let you actually know when the spawnpoint was created, so that, in the case of spotlight-hour ones, they can simply be deleted.
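
A minimal sketch of the quarter-hour bitfield from item 1, with a hypothetical seenBits field standing in for the new spawnpoint column:

```swift
// Sketch of the quarter-hour bitfield idea. seenBits is my placeholder
// name for the proposed 32-bit unsigned int column on spawnpoint.
struct SpawnTimeInfo {
    var seenBits: UInt32 = 0

    // Record a sighting: bit 0 = minutes 0-14, bit 1 = 15-29, etc.
    mutating func markSeen(minuteOfHour: Int) {
        seenBits |= 1 << (minuteOfHour / 15)
    }

    // Seen in all 4 quarter hours -> 60 min spawn; fewer -> 30 min.
    var isHourSpawn: Bool { seenBits & 0b1111 == 0b1111 }
}

var info = SpawnTimeInfo()
info.markSeen(minuteOfHour: 5)    // first quarter
info.markSeen(minuteOfHour: 20)   // second quarter
info.markSeen(minuteOfHour: 40)   // third quarter
print(info.isHourSpawn)           // false, fourth quarter never seen
```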

Jumpy

  1. Add config values (a sketch of the resulting struct follows this list):
    1. Default lat/lon in case something gets wonky; mad created enough spawns at 0,0. I would use the coords of someone's house I don't like, so it would look like they are rubber-banding back to their house.
    2. Make the query that gets points use the last_seen field, defaulting to 7200 min. That way, if you have event points in the db, they are not included in the data for jumps. Use a value of -1 to include all spawnpoints regardless of how recently they were seen.
    3. Set how often the db is requeried, defaulting to 1 hr.
    4. Set the minimum spawn time, defaulting to 25 min. For anything less, I think you should use routes.
    5. Set how close the algorithm will get to the calculated spawn time, i.e. the buffer time, defaulting to 20 sec. This is to avoid overrunning spawn times.
    6. Set how long the algorithm sleeps when it gets too close to the buffer time, defaulting to 10 sec. If it gets close, the mode sleeps for that long to back off from overrunning the actual spawn time.
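
A sketch of those config values gathered into one place; the names are my own guesses, not RDM's actual setting keys:

```swift
import Foundation

// Proposed jumpy config values with their stated defaults.
struct JumpyConfig {
    var fallbackLat = 0.0            // used only if something gets wonky;
    var fallbackLon = 0.0            // substitute coords of your choosing
    var lastSeenWindowMin = 7200     // query filter on last_seen; -1 = all points
    var requeryInterval: TimeInterval = 3600   // re-pull the db every hour
    var minSpawnTimeMin = 25         // skip spawns shorter than this; use routes
    var bufferTimeSec = 20           // stay this far ahead of the spawn time
    var backoffSleepSec = 10         // nap when we run up against the buffer
}
```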

Findy

  1. Swapped over to a clustering algorithm based on kneckter's Python code on GitHub. No route calcs, as Atlas doesn't care.
  2. Status now shows position in route, clusters, and total spawnpoints.
  3. Added debug code to show when TTH is found for a spawnpoint.
  4. The premise in setup is that it starts with a large cluster size and works through the clusters until there are just clusters of one.
  5. Added user config values:
    1. Allow the user to set how often to requery the db, defaulting to 90 sec, which is how long TTH exists for a spawn.
    2. Provide a method for the user to sort the data how they please, defaulting to random (aka shotgun). The user can tune the sort to control how findy works. This is only done for the first query upon instance startup, and is somewhat negated by clustering.
    3. Max query count - how many points you want the db to feed the algorithm. There is no point having 10k points if you have 1 worker. My 100 ATVs can each see 30 spots in 90 sec (the TTH window), so 3k per 90 sec for my 100 ATVs; it will therefore default to 3k. Obviously lower values will speed up the clustering.
    4. Set the starting cluster size, defaulting to 8. The algorithm will create route points from fewer clusters that consist of more points per circle, so hopefully TTH is found quicker. As the data is whittled down, the cluster size reduces to a minimum of 1. If the user sets too high a value, say 20, and no cluster has 20 points, it will iterate downward until there is at least 1 point in the route (if possible, which it may not be if all TTH are found for the area).
    5. Add logic to lower the cluster count per circle so that coords are not passed over multiple times in the query window. It is calculated off 1.25 times the hops in the last interval: if 50 hops were seen last interval, the cluster size drops when fewer than 75 points are in the route (see the sketch below).
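
A minimal sketch of that cluster-size reduction, assuming a hypothetical nextClusterSize helper:

```swift
// Shrink the cluster size when the route gets short enough that devices
// would pass over the same coords within one query window.
// Threshold = 1.25 x the hops seen in the last interval.
func nextClusterSize(current: Int, routePointCount: Int, hopsLastInterval: Int) -> Int {
    let threshold = Int((Double(hopsLastInterval) * 1.25).rounded())
    // e.g. 50 hops last interval -> drop the cluster size once the
    // route holds fewer than 75 points, bottoming out at 1.
    if routePointCount < threshold {
        return max(1, current - 1)
    }
    return current
}

print(nextClusterSize(current: 8, routePointCount: 70, hopsLastInterval: 50)) // 7
print(nextClusterSize(current: 8, routePointCount: 90, hopsLastInterval: 50)) // 8
```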