Plan Mode - Trustmania/PGO-mapscan-opt GitHub Wiki

Plan Mode

This mode can be used to scan large areas for spawn point data. First you need to create plans; there are four different formats for this: 'seikur0_s2', 'seikur0_spir', 'seikur0_circle' and 'raw'. You may already wonder why I named these formats after myself. The reason is simple: with this planning upgrade I meant to create a standard for large-scale spawn point scanning. For this I created a highly efficient grid covering the earth with circles, so the three seikur0 formats use this grid and are therefore quite specific.

New plans are stored in the subdirectory res/learning/learn_plans. Create a subdirectory for your worker group's id number if one doesn't already exist; workers with id 0 will execute plans from res/learning/learn_plans/id0. Plan files use JSON format but are saved with the extension .plan. The filename doesn't matter:

seikur0_s2:

{
 "type": "seikur0_s2",
 "token": "47b729"
}

It has the type and the token as a google s2 cell token (this one is level 10), so it'll scan all grid points within that s2 cell.
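As a concrete sketch (standard library only; the directory layout follows the paths above, and the filename is arbitrary), such a plan file could be written like this:

```python
import json
import os

# Sketch: write a seikur0_s2 plan for worker group 0.
# "47b729" is the example level-10 s2 cell token from above.
plan = {"type": "seikur0_s2", "token": "47b729"}
plan_dir = os.path.join("res", "learning", "learn_plans", "id0")
os.makedirs(plan_dir, exist_ok=True)
plan_path = os.path.join(plan_dir, "47b729.plan")  # any filename works
with open(plan_path, "w") as f:
    json.dump(plan, f, indent=1)
```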

seikur0_spir:

{
 "type": "seikur0_spir",
 "location": [53.0,8.0],
 "range": 30
}

It has the type, the location as latitude/longitude in degrees and the range in hex layers.

seikur0_circle:

{
 "type": "seikur0_circle",
 "location": [53.0,8.0],
 "radius": 3300
}

It has the type, the location as latitude/longitude in degrees and the radius in meters.

raw:

{
 "type": "raw",
 "id": "something_unique",
 "locations": [[53.0,8.0],[53.1,8.1],[53.2,8.2]]
}

It has the type, the id (used to give the learning file a unique filename) and the scan locations that you'll actually scan.
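Since a raw plan is just a JSON object, it's easy to generate one programmatically, for example from a list of coordinates (a minimal sketch; the grid spacing below is illustrative, not a tuned scan distance):

```python
import json

# Sketch: build a raw plan from three illustrative lat/lng points.
locations = [[round(53.0 + 0.1 * i, 1), round(8.0 + 0.1 * i, 1)]
             for i in range(3)]
plan = {"type": "raw", "id": "something_unique", "locations": locations}
print(json.dumps(plan, indent=1))
```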

subplans:

For all these types, the two entries subplans and subplan_index can be added to mark the plan as being split into multiple parts. Example:

{
 "type": "seikur0_spir",
 "location": [53.0,8.0],
 "range": 30,
 "subplans": 2,
 "subplan_index": 1
}

with the example being part 1 of 2. This lets you divide a plan into separate runs to fit your needs or your number of workers.
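Splitting a plan could be sketched like this (standard library only; each resulting dict would go into its own .plan file):

```python
import copy

# Sketch: split one seikur0_spir plan into two subplan parts.
base = {"type": "seikur0_spir", "location": [53.0, 8.0], "range": 30}
parts = 2
subplans = []
for index in range(1, parts + 1):
    part = copy.deepcopy(base)
    part["subplans"] = parts       # total number of parts
    part["subplan_index"] = index  # which part this file is (1-based)
    subplans.append(part)
```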

Automated Plan Creation

For now I've included a tool, spawnview.py, which can be used to click level 10 s2 cells on a map to select your planned scan area. If you start at an existing marker, you can hold down the left mouse button and drag to add cells in an area. When you're done, right-click to save the plans. You'll also be able to select how many parts to split each level 10 cell into.

Tip: When saving the plans, a popup window should appear asking how many parts you want to split the cell(s) into. If no popup is shown, your browser or a browser extension may be blocking it; disable popup blockers or try another browser.

  • 1 part needs ~160 workers, which is above the softban limit for most people.
  • 2 parts need ~80 workers, and so on.

The plans will now be saved in the res/learning/learn_plans/new folder. To use the plans with the id0 profile, move them from that folder into the id0 folder; the same goes for other ids. Then start the scanner as described in the Executing Plans section. It'll work through all these plans at a rate of roughly one plan per hour, sometimes slightly longer.
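Moving the generated plans from the new folder into the id0 folder can be done by hand, or with a small sketch like this (standard library only; paths as above):

```python
import os
import shutil

# Sketch: move all .plan files from the "new" folder to the id0 folder.
src = os.path.join("res", "learning", "learn_plans", "new")
dst = os.path.join("res", "learning", "learn_plans", "id0")
os.makedirs(src, exist_ok=True)
os.makedirs(dst, exist_ok=True)
for name in os.listdir(src):
    if name.endswith(".plan"):
        shutil.move(os.path.join(src, name), os.path.join(dst, name))
```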

Plan Mode Example

How to create plan files to generate learning files

In the res/usersettings.json file you will have a number of workers expressed in JSON format, like this. For the sake of simplicity, the example array contains only one user.

"profiles": [
    {
        "id": 0,
        "type": "ptc",
        "username": "ptcAccountNr1Name",
        "password": "ptcAccountNr1Password",
        "proxy": "",
        "coordinates":
        {
            "lat": X,
            "lng": Y
        },
        "learn_file": "main"
    }]

The property learn_file refers to a JSON file, containing the spawn points detected during the planning phase, located in res/learning/learn_files/. The complete path is therefore res/learning/learn_files/main.json. When main0.py is started in iscan mode (without the -p parameter), it loads main.json and begins intelligent scanning. This will not work until you have generated the data file using plan mode.
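The path resolution described above amounts to the following trivial sketch (an assumption on my part: the scanner simply appends .json to the learn_file value):

```python
# Sketch: how a profile's learn_file entry maps to a learning-file path.
profile = {"id": 0, "learn_file": "main"}
learn_path = "res/learning/learn_files/" + profile["learn_file"] + ".json"
print(learn_path)  # → res/learning/learn_files/main.json
```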

Create your plan file(s) using one of the above formats and save the plan(s) to res/learning/learn_plans/id0. For example, I divided one lvl 10 s2 cell into three parts with the Automated Plan Creation tool, and it spat out 3 files into the path res/learning/learn_plans/new/ with these names:

  • 47b729_1_3.plan
  • 47b729_2_3.plan
  • 47b729_3_3.plan

Remember that the seikur0_s2 plan format had a token property with the value 47b729? That value is the first part of the file names, common to all of them.

Now copy these files to res/learning/learn_plans/id0/. Note that the example had its worker id set to 0; change the number to match your needs.

How to build the learning files (using the plan files)

Running python main0.py -p -id # instructs the scanner to run in planning mode and execute the plans stored in res/learning/learn_plans/id#/, where # is the id of the worker group you want to start. When a scan completes, the spawn point data is written to res/learning/learn_files/. The run continues until all plans in the subdirectory are complete, creating one learning file per plan file.

How to use the learning files created (using the plan files)

In the directory res/learning/learn_files/ you will find all the learning files created. If you want to use a single learn file, reference its file name <learn_file>.json without the .json part in the main profile's json object.
<learn_file> stands for any file name, for example: seikur0_s2__463a37__1_3__70.0.json (not a good name, so you should probably rename it).

"profiles": [
    {
        "id": 0,
        "type": "ptc",
        "username": "ptcAccountNr1Name",
        ...
        "learn_file": "<learn_file>"
    }]

However, if you have multiple learning files created by Plan Mode, you need to merge them into one file with the spawnfix.py script located at the root of the project. Place spawnfix.py in a directory together with only the json files you want to merge, and run python spawnfix.py. It will generate a mapdata_rename_this.json file, which, as the name suggests, you need to rename and reference in the main profile's json object. Don't forget to put it in the res/learning/learn_files/ directory.
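Conceptually, a merge like the one spawnfix.py performs could be sketched as follows. This is a hedged illustration, not the actual spawnfix.py logic: it ASSUMES each learning file holds a JSON list of spawn-point dicts with "lat" and "lng" keys, which is a hypothetical schema I'm using only to show the idea of deduplicating by coordinates.

```python
import json

def merge_learn_files(paths):
    """Merge several learning files, dropping duplicate coordinates.

    ASSUMED schema: each file is a JSON list of dicts with "lat"/"lng".
    """
    merged, seen = [], set()
    for path in paths:
        with open(path) as f:
            for point in json.load(f):
                key = (point["lat"], point["lng"])
                if key not in seen:
                    seen.add(key)
                    merged.append(point)
    return merged
```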

Quality of the scan files

From now on the learning files will also include some parameters about the scan, with the goal of telling you instantly whether it went as it should.

empty: How many areas were empty in each run. If this suddenly increases by a lot, it can indicate softban issues.

emptynum: How many areas were empty across all runs; this shouldn't be too far from the empty values.

totalnum: How many areas were scanned as part of this file.

workers: How many workers were used.

retries: How many retries were used in each run. A cell that first comes up empty but isn't empty after a rescan counts as a retry (false negative). Some of these are normal.

compromised: Whether the time between two scans of an area exceeded 15 minutes at any point, leaving the scan unable to safely detect the spawn point type.

undefined: How many undefined (-1 type) spawn points you got, at a glance.

runtimes: The time each run needed; this should be below ~12 minutes.
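A quick sanity check over these parameters could look like the sketch below. The field names follow the list above, but the thresholds and the assumption that runtimes are stored in seconds are my own illustrative choices, not project-defined limits:

```python
def scan_looks_healthy(stats):
    """Return a list of suspicious findings (empty list = looks fine).

    Thresholds are illustrative assumptions; runtimes ASSUMED in seconds.
    """
    issues = []
    if stats["compromised"]:
        issues.append("scan interval exceeded 15 minutes")
    if stats["totalnum"] and stats["emptynum"] / stats["totalnum"] > 0.5:
        issues.append("more than half the areas were empty (softban?)")
    if any(t > 12 * 60 for t in stats["runtimes"]):
        issues.append("a run took longer than ~12 minutes")
    return issues

stats = {"compromised": False, "totalnum": 200, "emptynum": 20,
         "runtimes": [610, 605, 630]}
print(scan_looks_healthy(stats))  # → []
```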

Help wanted

If you complete seikur0_s2-type plans, or if you have any learning files created with my tool in general, I'd like to have them so I can collect them and build a larger database. If possible, please send me these files by mail: [email protected] I'll share them somewhere, maybe Google Drive, so everyone can access them. Thanks :)

Why you should prefer s2 plans over the other formats

The seikur0_s2 format combines google s2 cells with my optimized grid. So yeah, first of all, s2 cells are created by google, and they know what they're doing. The Pokemon GO app itself relies heavily on s2 cells for all kinds of things, like spawnpoint placement, fetching map data from the servers and even biomes. S2 cells are essentially a rectangular grid covering the whole earth. The genius thing about them is that they are a) scalable, b) easy to calculate with and c) account for the spatial distortion caused by the spherical shape of the earth.

There are 6 rectangular level 0 cells covering the whole earth (the sphere is projected onto the faces of a cube). It may look like some regions are not covered, or like the cells around the equator have irregular shapes, but that's due to the perspective. And that's also precisely why s2 cells are so useful: we want to divide an area up into smaller areas of equal size, which is easy to do on a plane, but very hard on a sphere. [image: level 0 cells]

Each of these splits into 4 level 1 cells, each level 1 cell splits into 4 level 2 cells, and so on up to a maximum of level 30. At that level, every cell covers less than a square centimeter. It's very easy to calculate the parent of an s2 cell, so given a level 30 cell you can easily find the level 20 cell that contains it. And for every latitude/longitude point on earth you can easily determine the cell it belongs to, at an arbitrary scale thanks to the levels.
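The parent calculation really is just bit arithmetic on the 64-bit s2 cell id (standard s2 encoding: 3 face bits, two bits per level, then a trailing 1 bit marking the level; a token is the id in hex with trailing zeros stripped). The helper names below are my own, but the arithmetic is the standard s2 scheme:

```python
def token_to_id(token):
    # A token is the 64-bit cell id in hex with trailing zeros stripped.
    return int(token, 16) << (4 * (16 - len(token)))

def id_to_token(cell_id):
    return format(cell_id, "016x").rstrip("0")

def cell_level(cell_id):
    lsb = cell_id & -cell_id          # lowest set bit marks the level
    return 30 - (lsb.bit_length() - 1) // 2

def parent_id(cell_id, level):
    lsb = 1 << (2 * (30 - level))
    return (cell_id & -lsb) | lsb     # zero out finer bits, set new marker

cid = token_to_id("47b729")           # the example token from above
print(cell_level(cid))                # → 10
print(id_to_token(parent_id(cid, 9))) # → 47b72c
```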

If you have scanned a complete level 10 cell rather than a circular or hexagonal area, and somebody in that same area wants to do their own scanning, they don't need to ask where exactly you put your scan area and at what radius, or figure out how on earth to place their own area next to it without overlap or waste while also minding the spatial distortion of the spherical earth. They just scan the neighboring level 10 cells in their area that you haven't scanned yet.

TL;DR: If you want to share your spawn point data, work together with other people at scanning an area or want to contribute to a global database of spawn points, you better use s2 cells.
