Debugging Voxel Globe (Developer) - VisionSystemsInc/voxel_globe GitHub Wiki
There are many moving pieces in Voxel Globe to debug, such as

- The client
- The web server
- The job queue (RabbitMQ)
- The database
- Each individual processing job
- And more...

How to get a debugger into each stage is covered in the following sections.
See Firebug or the Firefox built-in developer tools. Sometimes using Chrome is also useful.
TODO
TODO - `VIP_DJANGO_DEBUG`, `VIP_DJANGO_TEMPLATE_DEBUG`, `VIP_HTTPD_DEBUG_INDEXES`
TODO - Flower, the RabbitMQ HTTP interface, basic CLI commands, and maybe Python amqp tricks?
TODO - Django admin portal and pgadmin3
Debugging a celery task can be hard at first. Some tools have been included to ease this process.

Some VSI Debugging (vdb) tools have been written to get a debugger into the actual processing task. The typical debugging approaches won't work, for a number of reasons:

- You can not attach a debugger to a python process and wait, as celery starts up new python instances on demand. It's fairly difficult to attach to an already running python session too.
- `pdb.set_trace()` is only useful when you have direct stdin/stdout access. A celery task is buried deep in the celery worker daemon and is piped to log files.
- Calling tasks synchronously (`celery.task.apply`) becomes the only way left to call a task, and involves duplicating what voxel globe already does for you each time.
Two different debuggers are wrapped in `vsi.tools.vdb` to give you debugging access to a python program over the network.

- `rpdb` - The rpdb library is wrapped in `vsi.tools.vdb_rpdb` to give you telnet access to the original `pdb` library. The default is to serve on port 4444. It's a bit crude, but works for simple debugging.
- `rpdb2` - The rpdb2 library is wrapped in `vsi.tools.vdb_rpdb2` to give you network access to a debugger similar to pdb, but it actually runs in python's built-in profiler instead. This has some speed benefits, can handle multiple threads, and can follow multi-process forking (one process only, but you get to pick which process). You can debug the program using either the `rpdb2` command line tool or the `winpdb` GUI tool.
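The idea behind the rpdb-style wrapper can be sketched in a few lines of plain Python. This is an illustration only, not the actual `vsi.tools.vdb_rpdb` code: serve `pdb` over a TCP socket (port 4444, rpdb's default) instead of the worker daemon's pipes, then drive it with `telnet localhost 4444`.

```python
import pdb
import socket
import sys

def remote_set_trace(port=4444):
    """Rough sketch of an rpdb-style set_trace (NOT the vsi.tools code):
    run pdb with its stdin/stdout wired to a TCP socket, so a telnet
    client can drive the debugger even when the process has no tty."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('127.0.0.1', port))
    server.listen(1)
    client, _ = server.accept()          # blocks until telnet connects
    handle = client.makefile('rw')
    debugger = pdb.Pdb(stdin=handle, stdout=handle)
    debugger.set_trace(sys._getframe().f_back)  # break in the caller
```

The real wrapper handles reconnects and cleanup; this sketch only shows why the trick works at all: `pdb.Pdb` accepts arbitrary file-like `stdin`/`stdout` objects.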
```python
import vsi.tools.vdb_rpdb2 as vdb
vdb.set_trace(_rpdb2_pwd="synsepalum dulcificum")
```
Start up `winpdb`, using the same password, and attach to your program where the `set_trace` left off.

```
just build_winpdb
just winpdb
```
To automatically break on the start of a task, set the environment variable `VIP_CELERY_DBSTOP_ON_START` to a regular expression for the task name (`.*` for all tasks). The regex matches the task name, for example

```
voxel_globe.create_site.tasks.create_site
voxel_globe.event_trigger.tasks.event_trigger
voxel_globe.filter_number_observations.tasks.filter_number_observations
```
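As a quick illustration of how such a regex relates to the task names above (standard Python `re` semantics assumed; whether the implementation uses `match` or `search` isn't specified here, this sketch uses `match`):

```python
import re

# Hypothetical value for VIP_CELERY_DBSTOP_ON_START: break only on the
# create_site task
pattern = re.compile(r'voxel_globe\..*\.create_site')

print(bool(pattern.match('voxel_globe.create_site.tasks.create_site')))
print(bool(pattern.match('voxel_globe.event_trigger.tasks.event_trigger')))
```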
This will dump you into the celery prerun section. Step up to `celery/app/trace.py` and step into the `-*- TRACE -*-` `try` section where `fun` is called, then step into `self.run` in `__protected_call__`. At this point, you should be in your celery task.
Sometimes `set_trace` isn't convenient. Another useful trick is `vsi.tools.vdb_rpdb.dbstop_if_error`/`vsi.tools.vdb_rpdb2.dbstop_if_error`, which will automatically attach a post mortem debugger after an uncaught python exception. This is useful for only entering the debugger for failed tasks (albeit after the program has crashed and can no longer be recovered).
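The mechanism behind a `dbstop_if_error`-style hook can be sketched as follows. This is an illustration only, not the vsi code; the vsi versions attach a network debugger, while plain `pdb` stands in here:

```python
import pdb
import sys
import traceback

def install_dbstop_if_error():
    """Sketch of a dbstop_if_error-style hook: install an excepthook
    that prints the traceback and drops into a post-mortem debugger on
    any uncaught exception."""
    def hook(exc_type, exc_value, exc_tb):
        traceback.print_exception(exc_type, exc_value, exc_tb)
        pdb.post_mortem(exc_tb)
    sys.excepthook = hook
```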
Celery has an `on_failure` method that is called when ANY celery task raises an uncaught exception. To automatically trigger an `rpdb` post mortem debugger on failure, set the environment variable `VIP_CELERY_DBSTOP_IF_ERROR` to `1`. This is typically set in the `local_vip.env` file. Setting it to `1` will start a post mortem debugger on any failed task, without having to change any code. This is good for when you aren't expecting any crashes, but are ready to debug them. You shouldn't leave this on all the time, as failed tasks will pile up waiting to be debugged and never end.
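For reference, the shape of such an `on_failure` hook might look like the following sketch. The class name and logic are illustrative, not the actual voxel_globe code; a real version would subclass `celery.Task`, and the vsi network debugger would replace plain `pdb`:

```python
import os

class DebugOnFailureTask:  # a real version would subclass celery.Task
    """Sketch of an on_failure hook gated on VIP_CELERY_DBSTOP_IF_ERROR."""

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # Only drop into a post mortem debugger when the env var is set,
        # so no code changes are needed to turn debugging on or off
        if os.environ.get('VIP_CELERY_DBSTOP_IF_ERROR') == '1':
            import pdb  # stand-in for the rpdb post mortem debugger
            pdb.post_mortem(einfo.tb if einfo else None)
```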
It is possible to run a task in a python session, using the `apply` function to run a task synchronously. A task function should never be run directly! All voxel globe tasks are meant to be bound to a celery task, and running one unbound (directly) will result in errors. The `/apps/task/status/{taskid}` endpoint can be used to view status on synchronous task calls.
```python
# Normally handled by celery-django
import django
django.setup()

# Normally run by celery.py
import boxm2_register
boxm2_register.smart_register = True

# Create a service instance object for the database
import voxel_globe.meta.models as models
si = models.ServiceInstance(serviceName='debug')
si.save()

# Set up logging so you see all the logging messages
import celery.utils.log
celery.utils.log.base_logger.setLevel(celery.utils.log.logging.DEBUG)

# Run the task synchronously (import the tasks module for the task you
# want, and define voxel_world_id and threshold for your data first)
t = tasks.generate_error_point_cloud.apply(args=(voxel_world_id, threshold, None), task_id=str(si.id))
```
It's a little harder to get gdb attached, but there are a number of methods.
- Download and install Visual Studio Code
- Start VS Code
- Go to extensions, type in `cpptools`, and make sure you select the Microsoft plugin and install it
- File -> Open Folder, and select the voxel_globe directory
- In your `local_vip.env`, set `VIP_VXL_BUILD_TYPE` to `Debug` or `RelWithDebInfo` and build vxl (`just vxl`). Also make sure `export VIP_GDB=1` is set, and restart celery/notebook
- (Optional) Set your breakpoints in vxl
- Edit your `.vscode/launch.json` file to include the following configuration (don't forget to substitute `{__CONTAINER_NAME__}`). I suggest making a new configuration and naming it differently for each container name
```json
{
  "name": "C++ Attach",
  "type": "cppdbg",
  "request": "attach",
  "program": "/usr/bin/python",
  "processId": "${command:pickRemoteProcess}",
  "pipeTransport": {
    "pipeCwd": "${workspaceRoot}",
    "pipeProgram": "docker",
    "pipeArgs": [
      "exec",
      "-i",
      {__CONTAINER_NAME__},
      "sh",
      "-c"
    ],
    "debuggerPath": "/usr/bin/gdb"
  },
  "sourceFileMap": {
    "/opt/vip": "${workspaceRoot}",
    "/vxl_src": "${workspaceRoot}/external/vxl_src"
  },
  "linux": {
    "MIMode": "gdb",
    "setupCommands": [
      {
        "description": "Enable pretty-printing for gdb",
        "text": "-enable-pretty-printing",
        "ignoreFailures": true
      },
      {
        "description": "Auto load symbols when loading an .so file",
        "text": "set auto-solib-add",
        "ignoreFailures": false
      }
    ]
  },
  "osx": {
    "MIMode": "gdb",
    "setupCommands": [
      {
        "description": "Enable pretty-printing for gdb",
        "text": "-enable-pretty-printing",
        "ignoreFailures": true
      },
      {
        "description": "Auto load symbols when loading an .so file",
        "text": "set auto-solib-add",
        "ignoreFailures": false
      }
    ]
  },
  "windows": {
    "MIMode": "gdb",
    "setupCommands": [
      {
        "description": "Enable pretty-printing for gdb",
        "text": "-enable-pretty-printing",
        "ignoreFailures": true
      },
      {
        "description": "Auto load symbols when loading an .so file",
        "text": "set auto-solib-add",
        "ignoreFailures": false
      }
    ]
  }
}
```
- Start/run your process in celery/notebook
- Go to debug and run the attach configuration in VS Code
- Choose the pid inside the container to attach to
- Debug like it's 1999!
Launching is very similar to attaching, explained above. The container must already be running; the container itself is not launched, only the process inside it.

- This time, edit your `launch.json` file to include the following (don't forget to fill in `{__CONTAINER_NAME__}`, `{__PROGRAM__}`, and `{__PROGRAM_ARGS__}` (comma separated))
```json
{
  "name": "C++ Launch",
  "type": "cppdbg",
  "request": "launch",
  "program": "/bin/bash",
  "pipeTransport": {
    "pipeCwd": "${workspaceRoot}",
    "pipeProgram": "docker",
    "pipeArgs": [
      "exec",
      "-i",
      {__CONTAINER_NAME__},
      "sh",
      "-c"
    ],
    "debuggerPath": "/usr/bin/gdb"
  },
  "sourceFileMap": {
    "/opt/vip": "${workspaceRoot}",
    "/vxl_src": "${workspaceRoot}/external/vxl_src"
  },
  "args": [
    "/opt/vip/wrap", {__PROGRAM__}, {__PROGRAM_ARGS__}
  ],
  "stopAtEntry": false,
  "cwd": "/opt/vip",
  "environment": [],
  "externalConsole": true,
  "linux": {
    "MIMode": "gdb",
    "setupCommands": [
      {
        "description": "Enable pretty-printing for gdb",
        "text": "-enable-pretty-printing",
        "ignoreFailures": true
      },
      {
        "description": "Auto load symbols when loading an .so file",
        "text": "set auto-solib-add",
        "ignoreFailures": false
      }
    ]
  },
  "osx": {
    "MIMode": "gdb",
    "setupCommands": [
      {
        "description": "Enable pretty-printing for gdb",
        "text": "-enable-pretty-printing",
        "ignoreFailures": true
      },
      {
        "description": "Auto load symbols when loading an .so file",
        "text": "set auto-solib-add",
        "ignoreFailures": false
      }
    ]
  },
  "windows": {
    "MIMode": "gdb",
    "setupCommands": [
      {
        "description": "Enable pretty-printing for gdb",
        "text": "-enable-pretty-printing",
        "ignoreFailures": true
      },
      {
        "description": "Auto load symbols when loading an .so file",
        "text": "set auto-solib-add",
        "ignoreFailures": false
      }
    ]
  }
}
```
- Go to debug and run the launch configuration
- Debug
Start a notebook and use the attach method:

```python
import boxm2_adaptor as b
b.ocl_info()
```

or the launch method with args

```json
"args": [
  "/opt/vip/wrap", "python", "-c", "import boxm2_adaptor as b; b.ocl_info()"
]
```

Then add a breakpoint in `bocl_info_process.cxx` and run.
- I get `ERROR: Unable to start debugging. Unexpected GDB output from command "-exec-run". During startup program exited normally.` or `Unable to start debugging. Attaching to process 108 with GDB failed because of insufficient privileges with error message 'ptrace: Operation not permitted.'.` every time I try to launch/attach, and the program just runs.
  - This is seen when you don't have ptrace permissions. Try adding `export VIP_GDB=1` to your `local_vip.env`
- I can't (or am afraid that I can't) attach fast enough. Between the time I figure out what pid to attach to and the time I attach the debugger, the code I want to debug will already have executed.
  - For Notebook, you can just attach before running the commands of interest
  - For Celery, you can set `VIP_NUMBER_CORES=1`, and then you know which worker pid to attach to before running a task
  - For Celery, you can also set `VIP_CSTOP_ON_START=".*"` (or some other regular expression that matches the task name) to cause the python process to SIGSTOP on the start of the task. This gives you all the time you need to attach and then continue from the debugger
  - The last method is to change the C/C++ code and recompile so that when it gets to a specific part of the code, it will stop for you (see "Less Preferred" below). This is the least user friendly way, but it is kept for completeness
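The SIGSTOP trick above can be demonstrated in plain Python (POSIX assumed; this is an illustration, not voxel_globe code): the child process stops itself, leaving you unlimited time to attach a debugger, and resumes when it receives SIGCONT, which is effectively what continuing from the debugger does.

```python
import os
import signal
import time

pid = os.fork()
if pid == 0:
    # Child: stop ourselves, just like raise(SIGSTOP) in a task, and
    # wait indefinitely for someone to attach and continue us
    os.kill(os.getpid(), signal.SIGSTOP)
    os._exit(42)  # only runs after SIGCONT arrives

time.sleep(0.5)               # plenty of time for the child to stop
os.kill(pid, signal.SIGCONT)  # what "continue" from the debugger does
_, status = os.waitpid(pid, 0)
print(os.WEXITSTATUS(status))  # the child resumed and finished
```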
- Run `just gdb`. This will start gdb inside the docker container in tui mode
- Select the process to attach to
- Debug!
Note: When debugging celery, knowing which process to attach to can be a problem. The easiest solution is to add the following to your `local_vip.env` file

```
export VIP_NUMBER_CORES=1
```
- Add `export VIP_GDBSERVER=1` to your `local_vip.env` file
- Determine the pid of the target process
- Start gdbserver in normal attach mode: run `./just gdbserver2`. After selecting the container you want to run gdbserver in, a list of pids running in that container will be displayed. Enter the number for the pid you want to attach to
- A file `gdbinit` is auto generated in the base repo directory. This is a list of commands that need to be run when starting gdb. You can use `gdb -x {voxel_globe_repo_dir}/gdbinit` if using pure gdb
- Connect your favorite gdb debugger to the remote gdb, using the `localhost:{port#}` printed by the `./just gdbserver2` command
- Start Eclipse (with CDT installed)
- Run -> Debug Configurations...
- Select `C/C++ Remote Application`
- Create a new launch configuration and name it
- Under the Main tab, a `C/C++ Application` must be specified. Anything will do; try `/bin/ls`. This is just how the Eclipse CDT is hard coded to behave.
- Under the Debugger tab, in the Main sub-tab, under `GDB command file` enter `{voxel_globe_repo_dir}/gdbinit`. The gdbinit gets updated by the `./just gdbserver2` command every time.
- Under the Connection sub-tab, you will need to manually update the hostname and port number every time you restart a container.
- You no longer have to (but can if you want) go to the Source tab and add `{voxel_globe_repo_dir}/external` to get the source files to break correctly. This is also handled by `gdbinit` now
- Click Debug. From now on you can just run Launch. You will only have to reconfigure it every time the port number changes (or you switch which container you are debugging, since it will be a different port number)
Less Preferred

- Modify the C code. Add

```c
#include <signal.h>  /* at top of file; getpid also needs <unistd.h> */

printf("%d\n", getpid());
raise(SIGSTOP);  /* where you want the debugger to attach */
```

Or, for C++

```cpp
#include <csignal>  // at top of file

cout << getpid() << endl;
raise(SIGSTOP);  // where you want the debugger to attach
```

- Run `./just vxl restart celery`
Ideally, printing in a piece of python/C++ code should output to the logs. This is sometimes not direct, trivial, or instant, making debugging difficult. This section goes over each program to explain the caveats.
- Nginx - There is no custom code running on nginx. There is a 3rd party mod needed to echo in an nginx config; it is not included in voxel_globe
- Asgi (runworker) - `stdout` and `stderr` should both just work in python and c. `stderr` always just works. `stdout` was buffered, but with the `-u` flag added, it should work too now.
- Uwsgi - `stdout` and `stderr` should both just work in python and c
- Daphne - `stdout` was buffered, but with the `-u` flag added, it is now unbuffered. Currently no voxel_globe code runs in Daphne, so there is no way to print in it, but there is no need to either. If there ever is, `stdout` and `stderr` should work
- rabbitmq - N/A. RabbitMQ is a black box, soon to be replaced by redis
- Notebook - Notebook is tricky. python `stdout` prints in the notebook, and python `stderr` prints in the notebook, in red. C `stdout` and C `stderr` print out in the logs.
- Redis - N/A. Redis is a black box. There is a "redis desktop manager" useful for exploring redis with: `just build_rdm rdm`
- Flower - N/A
- Postgresql - N/A. There are ways to print out every SQL query in Django
- celery - python `stdout` and `stderr` are redirected to the logging via a LoggingProxy. There is no good way (and many bad ways) to capture C `stdout` and `stderr`. One such way is using `vsi.tools.redirect.StdRedirect`:

```python
with StdRedirect(open(os.path.join(voxel_globe.tools.log_dir(),
                                   self.request.id) + '_out.log', 'w'),
                 open(os.path.join(voxel_globe.tools.log_dir(),
                                   self.request.id) + '_err.log', 'w')):
    ...  # task body whose C-level output should be captured
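To see why a file-descriptor-level tool is needed here: C code writes to fd 1 directly, so reassigning `sys.stdout` in Python can't capture it. A minimal sketch of the descriptor-duplication approach (illustrative only; `StdRedirect` is the real tool and its internals may differ):

```python
import os
import sys
import tempfile

def capture_fd1(fn):
    """Capture everything written to file descriptor 1 (including writes
    from C code) while fn runs, by pointing fd 1 at a temp file."""
    with tempfile.TemporaryFile(mode='w+') as tmp:
        sys.stdout.flush()
        saved = os.dup(1)           # keep the real stdout alive
        os.dup2(tmp.fileno(), 1)    # fd 1 now goes to the temp file
        try:
            fn()
            sys.stdout.flush()
        finally:
            os.dup2(saved, 1)       # restore the real stdout
            os.close(saved)
        tmp.seek(0)
        return tmp.read()

# os.write(1, ...) bypasses Python's stdout object, like a C printf would
captured = capture_fd1(lambda: os.write(1, b'from the C level\n'))
```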