Debugging Voxel Globe (Developer)

There are many moving pieces in Voxel Globe to debug, such as

  • The client
  • The web server
  • The job queue (known as RabbitMQ)
  • The database
  • Each individual processing job
  • And more...

How to get a debugger into each stage is covered in the following sections.

Debugging Javascript on the Client

Use Firebug or Firefox's built-in Developer Tools. Chrome's developer tools are sometimes useful as well.

Debugging CSS on the client

TODO

Debugging the webserver

TODO - VIP_DJANGO_DEBUG, VIP_DJANGO_TEMPLATE_DEBUG, VIP_HTTPD_DEBUG_INDEXES

Debugging the job queue

TODO - Flower, rabbitmq http interface, and basic CLI commands, and maybe python amqp tricks?

Debugging the database

TODO - Django admin portal and pgadmin3

Debugging celery tasks

Debugging a celery task can be hard at first. Some tools have been included to ease this process.

vsi.tools.vdb - Debugging in Python

Some VSI Debugging (vdb) tools have been written to get a debugger into the actual processing task. The typical debugging approaches won't work, for a number of reasons:

  1. You cannot attach a debugger to a python process and wait, as celery tasks start up new python instances on demand. It's also fairly difficult to attach to an already running python session.
  2. pdb.set_trace() is only useful when you have direct stdin/stdout access. A celery task is buried deep in the celery worker daemon, with its output piped to log files.
  3. Calling tasks synchronously (celery.task.apply) becomes the only way left to call a task, and involves duplicating the setup voxel globe already does for you each time.

Two different debuggers are wrapped in vsi.tools.vdb to give you debugging access to a python program over the network.

  1. rpdb - The rpdb library is wrapped in vsi.tools.vdb_rpdb to give you telnet access to the original pdb library. The default is to serve on port 4444. It's a bit crude, but works for simple debugging.
  2. rpdb2 - The rpdb2 library is wrapped in vsi.tools.vdb_rpdb2 to give you network access to a debugger similar to pdb, but it actually runs in python's built-in profiler instead. This has some speed benefits, can handle multiple threads, and can follow multi-process forking (one process only, but you get to pick which process). You can debug the program using either the rpdb2 command line tool or the winpdb GUI tool.

Example

import vsi.tools.vdb_rpdb2 as vdb
#Pauses here until a debugger attaches with the matching password
vdb.set_trace(_rpdb2_pwd="synsepalum dulcificum")

Start up winpdb, using the same password, and attach to your program where the set_trace left off.

just build_winpdb
just winpdb

To automatically break at the start of a task, set the environment variable VIP_CELERY_DBSTOP_ON_START to a regular expression for the task name (.* for all tasks). The regex matches against the full task name, for example

voxel_globe.create_site.tasks.create_site
voxel_globe.event_trigger.tasks.event_trigger
voxel_globe.filter_number_observations.tasks.filter_number_observations
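
For example, to stop only at the start of the create_site task, you could add a line like the following to your local_vip.env file (this particular regex is just an illustration):

export VIP_CELERY_DBSTOP_ON_START='voxel_globe\.create_site\..*'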

This will dump you into the celery prerun section. Step up to celery/app/trace.py and step into the -*- TRACE -*- try section where fun is called, then step into self.run in __protected_call__. At this point, you should be in your celery task.

Sometimes set_trace isn't convenient. Another useful trick is vsi.tools.vdb_rpdb.dbstop_if_error/vsi.tools.vdb_rpdb2.dbstop_if_error, which automatically attaches a post mortem debugger after an uncaught python exception. This is useful for entering the debugger only for failed tasks (albeit after the program has crashed and can no longer be recovered).
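
A minimal sketch, assuming dbstop_if_error installs its hook when called with no arguments (mirroring the set_trace example above):

import vsi.tools.vdb_rpdb2 as vdb
vdb.dbstop_if_error() #A post mortem debugger will attach on any uncaught exception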

Celery has an on_failure method that is called when ANY celery task raises an uncaught exception. To automatically trigger an rpdb post mortem debugger on failure, set the environment variable VIP_CELERY_DBSTOP_IF_ERROR to 1. This is typically set in the local_vip.env file. Setting it to 1 will start a post mortem debugger on any failed task, without having to change any code. This is good for when you aren't expecting any crashes, but are ready to debug them. You shouldn't leave this on all the time, as failed tasks will pile up waiting to be debugged and never end.
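
For example, in local_vip.env:

export VIP_CELERY_DBSTOP_IF_ERROR=1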

Running a task locally

It is possible to run a task in a python session, using the apply function to run the task synchronously. A task function should never be run directly! All voxel globe tasks are meant to be bound to a celery task, and running one unbound (directly) will result in errors. The /apps/task/status/{taskid} endpoint can be used to view status of synchronous task calls.

Example

#Normally handled by celery-django
import django
django.setup()

#Normally run by celery.py
import boxm2_register
boxm2_register.smart_register = True

#Create a service object for the database
import voxel_globe.meta.models as models
si = models.ServiceInstance(serviceName='debug')
si.save()

#Setup logging so you see all the logging messages
import celery.utils.log
celery.utils.log.base_logger.setLevel(celery.utils.log.logging.DEBUG)

#Import the tasks module for the task you want to run; module paths follow
#the voxel_globe.{app}.tasks pattern shown earlier
import voxel_globe.generate_error_point_cloud.tasks as tasks

#voxel_world_id and threshold must be set to real values for this example task
t = tasks.generate_error_point_cloud.apply(args=(voxel_world_id, threshold, None), task_id=str(si.id))
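
apply runs the task in the current process and returns an EagerResult, so you can inspect the outcome directly (standard celery result attributes):

print(t.status) #e.g. 'SUCCESS' or 'FAILURE'
print(t.result) #The task's return value, or the exception if it failed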

Debugging C++ (Currently Notebook/Celery workers only)

It's a little harder to get gdb attached, but there are a number of methods.

Method 1

  1. Add export VIP_GDBSERVER=1 to your local_vip.env file
  2. Determine the pid of the target process: either os.getpid() in python, getpid() in C/C++, or some other form of inspection. A more involved approach is to add a python set_trace (see above) just to determine the PID and pause the python before the C++ starts, so you can set a breakpoint (see the sketch after this list).
  3. Start gdbserver in multi mode. This means you can attach to any pid in the container. Run ./just gdbserver
  4. A file gdbinit is auto generated in the base repo directory. This is a list of commands that need to be run when starting gdb. If using pure gdb, you can run gdb -x {voxel_globe_repo_dir}/gdbinit
  5. Connect your favorite gdb debugger to remote gdb, using localhost:{port#} printed by the ./just gdbserver command. (If you are debugging from a different computer, replace localhost with the name of the computer running the container [where you ran ./just gdbserver], but this is most likely NOT your situation)
  6. Eclipse example. Start Eclipse (with CDT installed)
  7. Run-> Debug Configuration...
  8. Select C/C++ Attach to Application. Yes, this is for local debugging; the gdbinit file will trick it into treating remote as local.
  9. Create New Launch Configuration and name it
  10. Under the Debugger tab, in the Main sub-tab, under GDB command file enter {voxel_globe_repo_dir}/gdbinit. The gdbinit file is updated by the ./just gdbserver command every time with the new port number, so you don't have to worry about that
  11. You no longer have to (but can if you want) go to the Source tab and add {voxel_globe_repo_dir}/external to get the source files to break correctly. This is also handled by gdbinit now
  12. Click Debug. From now on you can just run Launch without having to reconfigure it again.
  13. A PID list should pop up listing all the pids in the container. Select the one you want to debug, and click the OK button
  14. You must be in the Debug perspective with the console up. Switch the console to gdb (not gdb traces) and type in the share command. This will cause all shared libraries to be loaded. Now breakpoints and debugging should work as normal
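
A minimal sketch of step 2, pausing a task from python so you can read its PID and attach gdb before the C++ code runs (the password is the one from the earlier vdb example):

import os
print(os.getpid()) #Note this pid for the gdb PID list
import vsi.tools.vdb_rpdb2 as vdb
vdb.set_trace(_rpdb2_pwd="synsepalum dulcificum") #Paused here while you attach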

Method 2

Unlike Method 1, gdbserver is run against a specific pid. Method 1 should be superior, but this is kept just in case.

  1. Add export VIP_GDBSERVER=1 to your local_vip.env file
  2. Determine the pid of the target process. (See Method 1)
  3. Start gdbserver in normal attach mode. Run ./just gdbserver2. After selecting the container you want to run gdbserver in, a list of pids running in that container will be displayed. Enter the number for the pid you want to attach to
  4. A file gdbinit is auto generated in the base repo directory. This is a list of commands that need to be run when starting gdb. If using pure gdb, you can run gdb -x {voxel_globe_repo_dir}/gdbinit
  5. Connect your favorite gdb debugger to remote gdb, using localhost:{port#} printed by the ./just gdbserver2 command.
  6. Eclipse example. Start Eclipse (with CDT installed)
  7. Run-> Debug Configuration...
  8. Select C/C++ Remote Application
  9. Create New Launch Configuration and name it
  10. Under the Main tab, a C/C++ Application must be specified. Anything will do, try /bin/ls. This is just how the Eclipse CDT is hard coded to behave.
  11. Under the Debugger tab, in the Main sub-tab, under GDB command file enter {voxel_globe_repo_dir}/gdbinit. The gdbinit file is updated by the ./just gdbserver2 command every time.
  12. Under the Connection sub-tab, you will need to manually update the hostname and port number every time you restart a container.
  13. You no longer have to (but can if you want) go to the Source tab and add {voxel_globe_repo_dir}/external to get the source files to break correctly. This is also handled by gdbinit now
  14. Click Debug. From now on you can just run Launch. You will only have to reconfigure it every time the port number changes (or you switch which container you are debugging, since it will be a different port number)

Method 3

Less Preferred

  1. Modify the C code. Add

     #include <signal.h> //At top of file
     #include <stdio.h>  //For printf
     #include <unistd.h> //For getpid
    
     printf("%d\n", getpid());
     raise(SIGSTOP); // Where you want the debugger to attach
    

    Or in C++

     #include <csignal>  //At top of file
     #include <iostream> //For std::cout
     #include <unistd.h> //For getpid
    
     std::cout << getpid() << std::endl;
     raise(SIGSTOP); // Where you want the debugger to attach
    
  2. Run ./just vxl restart celery

  3. Use either Method 1 or Method 2 to connect to the gdb server

Method 4

  1. Add export VIP_GDBSERVER=1 to your local_vip.env file, or else you won't have vxl_src
  2. Run just gdb. This will start gdb inside the docker container in TUI mode
  3. Select the process to attach to
  4. Debug!

Printf debugging

Ideally, printing in a piece of python/C++ code should output to the logs. This is sometimes not direct, trivial, or instant, which makes debugging difficult. This section goes over each program to explain the caveats.

  1. Nginx

    There is no custom code running in nginx. Echoing in an nginx config requires a 3rd party module, which is not included in voxel_globe

  2. Asgi (runworker)

    stdout and stderr should both just work in python and C. stderr always just works. stdout was buffered, but with the -u flag added, it should work too now.

  3. Uwsgi

    stdout and stderr should both just work in python and C

  4. Daphne

    stdout was buffered, but with the -u flag added, it is now unbuffered. Currently no voxel_globe code runs in Daphne, so there is no way to print in it, but there is no need to either. If there ever is, stdout and stderr should work

  5. rabbitmq N/A

    RabbitMQ is a black box, soon to be replaced by redis

  6. Notebook

    Notebook is tricky. Python stdout prints in the notebook, and python stderr prints in the notebook in red. C stdout and C stderr print out in the logs.

  7. Redis N/A

    Redis is a black box. The "Redis Desktop Manager" is useful for exploring redis: just build_rdm rdm

  8. Flower N/A

  9. Postgresql N/A. There are ways to print out every SQL query in Django (see the sketch after this list)

  10. celery

    Python stdout and stderr are redirected to the logging via a LoggingProxy. There is no good way (and many bad ways) to capture C stdout and stderr. One such way is using vsi.tools.redirect.StdRedirect:

    import os
    import voxel_globe.tools
    from vsi.tools.redirect import StdRedirect
    out = os.path.join(voxel_globe.tools.log_dir(), self.request.id)
    with StdRedirect(open(out + '_out.log', 'w'), open(out + '_err.log', 'w')):
      pass #C stdout/stderr produced inside this block is captured to the log files
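
Regarding Postgresql above, one standard way to print every SQL query in Django (plain Django functionality, nothing voxel_globe specific) is to enable the django.db.backends logger in the Django settings. Note Django only emits these messages when DEBUG is True:

LOGGING = {
    'version': 1,
    'handlers': {'console': {'class': 'logging.StreamHandler'}},
    'loggers': {
        'django.db.backends': {'level': 'DEBUG', 'handlers': ['console']},
    },
}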
    