Ethical Considerations
False Negative Considerations
While the accuracy (based on AP50) for the YOLOv11s-P2 model approached the 60% mark with an inference speed just under 30 FPS, there are ethical considerations in using automated equipment in applications such as SAR, and it falls on the end user to assess whether the benefit of an autonomous device outweighs the risks arising from missed detections. Missed detections are false negatives and are reflected in the recall score (as opposed to precision, which is affected by false positives). For most models observed, recall was higher than precision, and for this particular use case, weighting recall more heavily may be the better option.
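As a minimal sketch of what weighting recall more heavily could look like in practice, the snippet below computes precision, recall, and an F-beta score with beta > 1, which penalises false negatives more than false positives. The counts used are hypothetical placeholders, not results from the YOLOv11s-P2 evaluation.

```python
# Sketch of recall-weighted scoring for a SAR detector.
# The counts below are illustrative only, not measured results.

def precision(tp: int, fp: int) -> float:
    """Fraction of predicted detections that are correct (hurt by false positives)."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Fraction of real people that were detected (hurt by false negatives)."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def f_beta(p: float, r: float, beta: float = 2.0) -> float:
    """F-beta score; beta > 1 (F2 here) weights recall above precision."""
    if p == 0.0 and r == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * p * r / (b2 * p + r)

# Hypothetical detection counts for illustration.
tp, fp, fn = 80, 30, 20
p, r = precision(tp, fp), recall(tp, fn)
print(f"precision={p:.2f} recall={r:.2f} F2={f_beta(p, r):.2f}")
```

Selecting or thresholding a model on a recall-weighted score like F2, rather than on precision or AP alone, is one way an end user could encode the judgement that a missed victim is costlier than a false alarm.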
Transferability of Object Detection Models
While this application serves a useful purpose, as a tool that can relieve rescue workers of manually flying and inspecting UAV footage and help find disaster victims more quickly, one should note the ethical considerations regarding the transferability of this kind of research, or of any research into detection/tracking models. Joseph Redmon alluded to a similar concern in [1] almost a decade ago. Since then, access to faster, more efficient AI models has only increased, yet the questions raised then remain unanswered: what should we do with these fast and accurate detection models once we have them? And can anything be done to prevent research with good intentions, such as tracking animals for environmental purposes, from being used to harm other people?
This is a serious ethical consideration with no clear answer, if an answer even exists. Regulation may offer some viable routes, though likely at the risk of stifling innovation, while still running the risk that anyone determined to use these tools for harm will continue to do so.
[1] J. Redmon and A. Farhadi, "YOLOv3: An Incremental Improvement," Apr. 2018, arXiv:1804.02767 [cs]. [Online]. Available: http://arxiv.org/abs/1804.02767