DeploymentTestSet - skilchen/bots GitHub Wiki
Build a good test-set
When you make changes in your EDI environment, you want to know that everything still 'runs as before'. An isolated acceptance test is very helpful for this.
This is easy to demonstrate:
- Download a plugin from the bots SourceForge site and install it (not on your production environment ;-)).
- In config/bots.ini, set 'runacceptancetest' to True (see the snippet after this list).
- Run bots-engine from the command line.
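A minimal sketch of that configuration change, assuming the setting lives in an [acceptance] section as in recent bots versions; check the comments in your own config/bots.ini:

```
# config/bots.ini (section name is an assumption; adjust to your version)
[acceptance]
# run bots-engine in acceptance-test mode and compare the results of the
# run with the expected results that come with the plugin
runacceptancetest = True
```

After this, a plain bots-engine run from the command line executes the routes in acceptance-test mode.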
How this works: in an acceptance test an extra script, usersys/routescripts/bots_acceptancetest.py, runs after all routes have finished. This script does two things:
- It compares the results of the run (number of files received, sent, errors, etc.) with the expected results. If the results differ, you'll see this in the command-line window.
- The files in botssys/infile/outfile (the expected output) are compared with the files generated by the run in botssys/outfile. If they differ, you'll see this in the command-line window as well (a sketch of this comparison idea follows below).
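For illustration, here is a minimal sketch of that file comparison, assuming a straightforward walk of the expected directory; this is not the actual bots_acceptancetest.py shipped with the plugin, and the paths are taken from the description above:

```python
# Hypothetical sketch of the comparison step (NOT the real
# usersys/routescripts/bots_acceptancetest.py): walk the expected files
# and report everything that is missing or different in the actual output.
import filecmp
import os

def compare_expected_with_generated(expected_dir, generated_dir):
    differences = []
    for root, dirs, files in os.walk(expected_dir):
        for name in files:
            expected = os.path.join(root, name)
            relative = os.path.relpath(expected, expected_dir)
            generated = os.path.join(generated_dir, relative)
            if not os.path.isfile(generated):
                differences.append('missing:   %s' % relative)
            elif not filecmp.cmp(expected, generated, shallow=False):
                differences.append('different: %s' % relative)
    return differences

if __name__ == '__main__':
    # paths as described above; adjust them to your bots installation
    for line in compare_expected_with_generated('botssys/infile/outfile',
                                                'botssys/outfile'):
        print(line)
```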
Some things to look at when you build a test-set:
- Use the 'acceptance test path' of your incoming and outgoing channels to point to your file system (this avoids real communication methods like pop3, ftp, etc.).
- Test files in botssys/infile are added to plugins (I find this very convenient).
- Counters (for message numbers, file names, etc., via unique()) are the same in every run, so the results are identical from run to run (see the mapping sketch after this list).
- If dates/times need to be generated, use transform.strftime(); it works like Python's time.strftime() but always returns the same date/time during acceptance testing (also shown in the sketch below).
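A minimal sketch of a mapping script using both helpers. The record/field names ('envelope', 'reference', 'date') and the counter name 'my_counter' are hypothetical, and the exact signatures of unique() and strftime() may differ per bots version, so treat this as an illustration rather than a drop-in script:

```python
# Hypothetical mapping script fragment: keeps output reproducible so the
# acceptance-test comparison sees identical files in every run.
from bots import transform

def main(inn, out):
    # unique() produces the same sequence of counter values in every
    # acceptance-test run, so generated message numbers do not change.
    out.put({'BOTSID': 'envelope', 'reference': transform.unique('my_counter')})
    # transform.strftime() behaves like time.strftime(), but returns a
    # fixed date/time while runacceptancetest is on.
    out.put({'BOTSID': 'envelope', 'date': transform.strftime('%Y%m%d')})
```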