Example: Visual Discrimination ‐ interface - fdechaumont/micecraft GitHub Wiki

This code creates a visual interface for real-time monitoring of, and interaction with, the touchscreen-based discrimination task for mice. It is the script you need to run in order to get both the visual interface and the running experiment.

While the experiment logic can run independently (relying on logs for monitoring), this script offers a graphical user interface (GUI) that visualizes the animals' positions and the status of the experimental rooms, and provides a context menu for manual control over the experiment. Running this script automatically starts the underlying experiment logic by instantiating the VisualDiscriminationExample class from here.

Context for the code

The application window displays a simplified top-down view of the experimental setup. Mice are represented as colored rectangles, and their movements between a home area and the testing rooms are animated. The status of hardware components such as gates and water pumps is also shown visually; the touchscreen itself is not displayed.
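
The animation uses a simple inertia model (implemented in `paintEvent` below): on each frame, the velocity is nudged toward the target position and damped, so the mouse glides smoothly instead of jumping. A minimal standalone sketch of that update rule, with the pull factor `1/1000` and damping `0.9` taken from the code below:

```python
def step(location, target, inertia, pull=1 / 1000, damping=0.9):
    """One animation frame: accelerate toward the target, then damp velocity."""
    xc, yc = location
    xt, yt = target
    xi, yi = inertia
    # acceleration proportional to the remaining distance, then damping
    xi = (xi + (xt - xc) * pull) * damping
    yi = (yi + (yt - yc) * pull) * damping
    return (xc + xi, yc + yi), (xi, yi)

# the position converges smoothly toward the target over repeated frames
loc, inert = (0.0, 0.0), (0.0, 0.0)
for _ in range(500):
    loc, inert = step(loc, (100.0, 0.0), inert)
```

With these constants the approach is overdamped: the position creeps up on the target without overshooting, which is why the mice in the GUI settle gently into a room rather than bouncing.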

A key feature of this interface is the right-click context menu, which provides extensive control over the experiment. Users can:

  • open or close gates
  • force an animal to proceed to the next experimental phase
  • adjust parameters for individual gates
  • manually trigger touchscreen events for debugging or intervention
  • view and change the images assigned to each animal
  • view the phase progression of each animal in the experiment

How to set up and modify this example

The visual setup is defined within the VisualDiscriminationExampleInterface.start method. The rooms are defined and positioned on the screen individually. You can add more VisualRoom instances and adjust their gate_pos and orientation to match your physical setup.
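
A minimal sketch of how `orientation` determines the gate angle and the block's grid offset, mirroring the logic of `VisualRoom.__init__` (standalone, without the widget classes; `layout_for` is a hypothetical helper name):

```python
def layout_for(gate_pos, orientation="horizontal"):
    """Return (gate_angle, block_position) the way VisualRoom lays out a room:
    a horizontal room extends to the right of its gate, a vertical one below."""
    if orientation == "horizontal":
        angle, shift = 0, (1, 0)
    else:
        angle, shift = 90, (0, 1)
    block_pos = (gate_pos[0] + shift[0], gate_pos[1] + shift[1])
    return angle, block_pos

# a room whose gate sits at grid cell (2, 0), oriented horizontally:
print(layout_for((2, 0)))              # (0, (3, 0))
# a room whose gate sits at (0, 2), oriented vertically:
print(layout_for((0, 2), "vertical"))  # (90, (0, 3))
```

Note that the room name passed to `VisualRoom` must match a room known to the experiment backend; otherwise `bind_to_experiment` logs a warning and skips the binding.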

Experiment logic and classes presentation

A dedicated thread continuously refreshes the GUI so that it reflects the current state of the experiment managed by the VisualDiscriminationExample class.

  • VisualDiscriminationExampleInterface: The main class for the PyQt5 application. It manages the window, widgets, drawing events, and the context menu. It holds an instance of VisualDiscriminationExample to interact with the experiment's backend.
  • UserAction: A simple wrapper class to link a QAction from the context menu to a function call with its arguments. It standardizes how user interactions from the GUI are executed and logged.
  • VisualRoom: Represents the visual components of a single experimental room. It contains visual objects like WWGate (gate), Block (the room itself), and WWWPump (water pump). It binds these visual elements to their corresponding hardware logic objects in the main experiment.
  • WWMouse2: A visual representation of a mouse. It handles the drawing and animation of the animal on the screen. Its position is updated based on the location of the real mouse in the experiment.
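
The `UserAction` pattern can be sketched independently of Qt: each menu entry is bound to a callable plus pre-captured arguments, and is executed (and optionally logged) only if the user actually picks that entry. A simplified standalone version (the `fake_gate` dict is purely illustrative; in the real script the callable would be, e.g., a gate method):

```python
import logging

class UserAction:
    """Bind a callable and its arguments; run (and log) them on demand."""
    def __init__(self, action, *args):
        self.action = action
        self.args = args
        self.log = None  # optional message written when the action runs

    def exec(self):
        if self.log is not None:
            logging.info("[user_action] " + self.log)
        if self.action is not None:
            self.action(*self.args)

# illustrative target: toggling a (fake) gate state from a menu entry
fake_gate = {"open": False}
action = UserAction(fake_gate.__setitem__, "open", True)
action.log = "open gate room_1"
action.exec()
print(fake_gate)  # {'open': True}
```

Keeping the callable and its arguments together in one object is what lets `contextMenuEvent` build a flat `action_map` and execute whichever entry the user selected with a single `action_map[action].exec()`.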
"""
This code defines the visual application of a visual discrimination experiment
for mice. It must be used in conjunction with the VisualDiscriminationExample
class. The application includes a context menu that allows users to manually
change the state of the experiment, such as controlling the gates, forcing an
animal to proceed to the next phase, or changing the touchscreen image
attribution. The displayed information represents the current state of the
experiment. Use it as an example or a template to create your own MiceCraft
experiment.
"""

from __future__ import annotations

from typing import Callable, Literal

from PyQt5 import QtCore, QtGui
from PyQt5.QtWidgets import QAction, QApplication, QMenu, QWidget

import matplotlib.pyplot as plt
import numpy as np
from matplotlib.backends.backend_qtagg import FigureCanvasQTAgg
from matplotlib.figure import Figure

import threading
import time
import traceback
import sys
import logging
from datetime import datetime
import matplotlib.dates as mdates
import os

from enum import Enum
from random import randint
from visualexperiments.WWMouse import WWMouse
from visualexperiments.Block import Block
from visualexperiments.Wall import WWWallSide, WWWall, WWWallType
from visualexperiments.WWGate import WWGate
from visualexperiments.WWFed import WWWFed
from blocks.autogate.Gate import Gate, GateMode, GateOrder
from blocks.DeviceEvent import DeviceEvent
from blocks.autogate.Parameters import (
    OPENED_DOOR_POSITION_MOUSE,
    CLOSED_DOOR_POSITION_MOUSE,
)
from itertools import cycle
from visualexperiments.WWLever import WWWLever
from visualexperiments.WWPump import WWWPump
from visualexperiments.VisualStorageAlarm import VisualStorageAlarm

from blocks.LMTBlock import LMTBlock
from experiments.ghfc.touch2.example_touchscreen import (
    VisualDiscriminationExample,
    TSImage,
)
from visualexperiments.WWMouse2 import WWMouse2
import colorsys


class UserAction:
    def __init__(self, action: Callable | None, *args) -> None:
        self.action = action
        self.args = args
        self.log: str | None = None

    def exec(self):
        """Log if necessary and execute the callable 'action' with 'args' if
        the callable is not None."""
        if self.log is not None:
            logging.info("[user_action] " + self.log)

        if self.action is not None:
            self.action(*self.args)


class VisualRoom:

    ALL: list[VisualRoom] = []

    @staticmethod
    def get_from_name(name: str) -> VisualRoom | None:
        """Get the VisualRoom object with the name 'name' in ALL."""
        for room in VisualRoom.ALL:
            if str(room) == name:
                return room
        return None

    def __init__(
        self,
        name: str,
        gate_pos: tuple[int, int],
        orientation: Literal["horizontal", "vertical"] = "horizontal",
    ) -> None:
        self.name: str = name
        if orientation == "horizontal":
            angle = 0
            room_shift = (1, 0)
        else:
            angle = 90
            room_shift = (0, 1)

        self.gate: WWGate = WWGate(gate_pos[0], gate_pos[1], self)
        self.gate.setName(name + "_gate")
        self.gate.setAngle(angle)

        self.block = Block(
            room_shift[0] + gate_pos[0], room_shift[1] + gate_pos[1], self
        )
        self.block.setName(name + "_block")
        self.wp: WWWPump = WWWPump(
            1 + gate_pos[0] + 0.4, gate_pos[1] - 0.4, self
        )
        self.wp.setName(name + "_pump")

        VisualRoom.ALL.append(self)

    def bind_to_experiment(
        self,
        experiment: VisualDiscriminationExample,
        visual_listener: Callable,
    ):
        room = experiment.get_room(name=self.name)
        if room is None:
            logging.info(
                f"[warning] [visual_room_binding] wrong_name {self.name}"
            )
            return
        self.gate.bindToGate(room.gate)
        room.gate.addDeviceListener(visual_listener)
        self.wp.bindToPump(room.wp)

    def __str__(self) -> str:
        return self.name


class VisualDiscriminationExampleInterface(QWidget):

    refresher = QtCore.pyqtSignal()

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

        self.name = "Visual experiment monitoring"
        self.shutting_down = False
        self.animals: list[WWMouse2] = []
        self.rooms: list[Block] = []
        self.painters: dict[str, QtGui.QPainter]
        self.visualStorageAlarm = None

    def on_refresh_data(self):
        self.update()

    def monitor_GUI(self):
        while not self.shutting_down:
            self.refresher.emit()
            time.sleep(0.1)  # define the 'FPS' of application

    def listener(self, event):
        print(f"Event received: {event}")

    def shutdown(self):
        print("Exiting...")
        self.shutting_down = True
        self.experiment.shutdown_experiment()

    def init_house(self, house_size: tuple[int, int] = (1, 1)):
        """Create the Block widget representing the home area at position
        (0, 0) and store it as 'house'.

        Parameters
        ----------
        house_size : tuple[int, int], optional
            Define the number of blocks that compose the house. They will be
            implemented along the x and y axes. By default (1, 1)
        """
        self.house = Block(
            0, 0, self
        )  # x and y are relative to block size (200)
        self.house.setSize(
            self.house.w * house_size[0], self.house.h * house_size[1]
        )
        self.house.setName("Big House")

    def init_rooms(self):
        for room in VisualRoom.ALL:
            room.bind_to_experiment(self.experiment, self.listener)

    def create_animal(self):
        """Instantiate an animal and append it to 'animals'."""
        number = len(self.animals) + 1
        animal_x = 140
        animal_y = 120 + ((number - 1) % 2 * 100)

        house_pos_available: tuple[int, int] = (
            int(self.house.w / 200),
            int(self.house.h / 200),
        )
        house_void = 2 * house_pos_available[0] * house_pos_available[1] - len(
            self.animals
        )

        if house_void > 0:
            for x in range(house_pos_available[0]):
                for y in range(house_pos_available[1]):
                    if 2 * (x + 1) * (y + 1) > len(self.animals):
                        animal_x += x * 200
                        animal_y += y * 200
                        break
        else:
            animal_x = int(self.animals[-1].x)  # type: ignore
            animal_y = int(self.animals[-1].y) + 100  # type: ignore

        animal = WWMouse2(animal_x, animal_y, self)
        animal.number = number

        animal.vpos = {}
        animal.vpos["home"] = (animal_x, animal_y)
        animal.vpos["target_location"] = (animal_x, animal_y)
        animal.vpos["location"] = (animal_x, animal_y)
        animal.vpos["inertia"] = (0, 0)

        self.animals.append(animal)

        for animal in self.animals:
            rgb = colorsys.hsv_to_rgb(
                animal.number / len(self.animals), 0.2, 1
            )
            animal.setBackgroundColor(
                int(rgb[0] * 255), int(rgb[1] * 255), int(rgb[2] * 255)
            )

    def get_all_rfid(self):
        """Get all RFID (visual only) in *animals*."""
        return [animal.rfid for animal in self.animals]

    def update_rfid(self):
        """Update the RFID of virtual animals with those from experiment."""
        expe_rfid_list = sorted(self.experiment.get_all_rfid())

        while len(expe_rfid_list) > len(self.get_all_rfid()):
            self.create_animal()

        visu_rfid_list = self.get_all_rfid()

        for expe_rfid in expe_rfid_list:
            if expe_rfid not in visu_rfid_list:
                i = 0
                while (
                    i < len(self.animals) and self.animals[i].rfid is not None
                ):
                    i += 1
                self.animals[i].rfid = expe_rfid  # type: ignore

    def set_animal_target(self):
        """Set the target position in a room for smooth animal movement."""
        for animal in self.animals:
            rfid = animal.rfid

            if rfid is None:
                continue

            room = self.experiment.get_room(rfid_in=rfid)
            if room is None:
                animal.vpos["target_location"] = animal.vpos["home"]
                continue

            visual_room = VisualRoom.get_from_name(str(room))

            animal.vpos["target_location"] = (
                visual_room.block.x + 40,  # type: ignore
                visual_room.block.y + 75,  # type: ignore
            )

    def get_pen(self) -> QtGui.QPainter:
        """Get the QPainter object with fixed parameters."""
        pen = QtGui.QPainter()
        pen.setRenderHint(QtGui.QPainter.Antialiasing)
        pen.setPen(QtGui.QPen(QtGui.QColor(50, 50, 50), 2))

        font = QtGui.QFont("Console")
        font.setPointSize(10)
        font.setBold(False)
        pen.setFont(font)

        return pen

    def paintEvent(self, event):  # type: ignore
        """Draw animal shape and information."""
        super().paintEvent(event)
        painter = self.get_pen()
        painter.begin(self)

        self.update_rfid()

        y_text = 20
        for rfid, animal in self.experiment.animals.items():

            text = f"{rfid}   |   {animal.phase}   |   " + "   |   ".join(
                animal.progression_display
            )

            painter.drawText(
                QtCore.QRect(10, y_text - 20, 600, y_text + 20),
                QtCore.Qt.AlignmentFlag.AlignCenter,
                text,
            )
            y_text += 15

        painter.drawText(
            QtCore.QRect(700, 0, 200, 40),
            QtCore.Qt.AlignmentFlag.AlignCenter,
            self.experiment.info.name,
        )

        self.visualStorageAlarm.draw(painter, textRect=QtCore.QRect(625, 0, 50, 50))  # type: ignore

        self.set_animal_target()

        for animal in self.animals:
            xt, yt = animal.vpos["target_location"]
            xc, yc = animal.vpos["location"]
            xi, yi = animal.vpos["inertia"]

            xa = xt - xc
            ya = yt - yc

            xa /= 1000
            ya /= 1000

            xi += xa
            yi += ya

            xi *= 0.9
            yi *= 0.9

            xc += xi
            yc += yi

            animal.vpos["location"] = (xc, yc)

            animal.move(int(xc), int(yc))
            animal.update()

        painter.end()

    def contextMenuEvent(self, event):  # type: ignore
        """Build the context menu, map each QAction to a UserAction, and
        execute the action chosen by the user."""
        menu = QMenu(self)

        title = QAction(self.name, menu)
        title.setDisabled(True)
        menu.addAction(title)

        menu.addSeparator()
        action_map: dict[QAction | None, UserAction] = {}

        # rooms
        # ----------------
        title = QAction("Rooms", menu)
        title.setDisabled(True)
        menu.addAction(title)
        menu.addSeparator()

        # rooms re-initialisation
        menu_action = QAction("re-initialise all hardware", menu)
        menu.addAction(menu_action)
        user_action = UserAction(self.experiment.init_experiment)
        user_action.log = "re-init all hardware"
        action_map[menu_action] = user_action

        # gate scale setting
        for room in self.experiment.get_all_rooms():
            gate = room.gate
            room_menu = QMenu(str(room), menu)
            menu.addMenu(room_menu)

            weight_menu = QMenu("gate expected weight", room_menu)
            room_menu.addMenu(weight_menu)

            # offer weights around the current expected weight, stepping by a
            # tenth of its order of magnitude (np.arange accepts float steps,
            # unlike the built-in range)
            power_ten = 10 ** int(np.log10(gate.mouseAverageWeight))
            weight_range = np.arange(
                gate.mouseAverageWeight - power_ten,
                gate.mouseAverageWeight + power_ten,
                power_ten / 10,
            )

            for weight in weight_range:
                weight_action = QAction(f"{weight} g", weight_menu)
                weight_menu.addAction(weight_action)
                user_action = UserAction(room.set_animal_weight, weight)
                user_action.log = "expected weight modified"
                action_map[weight_action] = user_action
                if weight == gate.mouseAverageWeight:
                    weight_action.setCheckable(True)
                    weight_action.setChecked(True)

            touch_action = QAction("correct touch", room_menu)
            room_menu.addAction(touch_action)
            user_action = UserAction(room.simulate_ts_event, True)
            action_map[touch_action] = user_action

            touch_action = QAction("wrong touch", room_menu)
            room_menu.addAction(touch_action)
            user_action = UserAction(room.simulate_ts_event, False)
            action_map[touch_action] = user_action

            display_action = QAction("random display", room_menu)
            room_menu.addAction(display_action)
            user_action = UserAction(room.ts_random_display, TSImage.LIGHT)
            action_map[display_action] = user_action

        # animals
        # ----------------
        title = QAction("Animals", menu)
        title.setDisabled(True)
        menu.addAction(title)
        menu.addSeparator()

        for rfid in self.experiment.get_all_rfid():
            rfid_menu = QMenu(rfid, menu)
            menu.addMenu(rfid_menu)

            # go to next phase
            if rfid in self.experiment.animals:
                menu_action = QAction("proceed to next phase", rfid_menu)
                rfid_menu.addAction(menu_action)
                user_action = UserAction(
                    self.experiment.animals[rfid].proceed_to_next_phase
                )
                action_map[menu_action] = user_action

            # modify touchscreen image
            ts_image = self.experiment.get_ts_image(rfid)
            ts_img_menu = QMenu("set TouchScreen image", rfid_menu)
            rfid_menu.addMenu(ts_img_menu)
            for img in TSImage:
                menu_action = QAction(img.name, ts_img_menu)
                ts_img_menu.addAction(menu_action)
                user_action = UserAction(
                    self.experiment.set_ts_image, rfid, img
                )
                action_map[menu_action] = user_action
                if ts_image == img:
                    menu_action.setCheckable(True)
                    menu_action.setChecked(True)

        # menu execution
        # ----------------
        action = menu.exec_(self.mapToGlobal(event.pos()))
        if action in action_map:
            action_map[action].exec()
        else:
            logging.info("[user_action] user action canceled")

    def start(self):
        """Initialise the application."""
        self.experiment = VisualDiscriminationExample()

        room_names = [room.name for room in self.experiment.get_all_rooms()]
        VisualRoom(
            name=str(room_names[0]),
            gate_pos=(2, 0),
            orientation="horizontal",
        )

        self.init_house()
        self.init_rooms()

        self.resize(1000, 400)
        self.setWindowTitle("LMT blocks - gate rfid back test")

        self.thread: threading.Thread = threading.Thread(target=self.monitor_GUI)  # type: ignore
        self.refresher.connect(self.on_refresh_data)
        self.thread.start()

        self.visualStorageAlarm = VisualStorageAlarm()


def excepthook(type_, value, traceback_):
    traceback.print_exception(type_, value, traceback_)
    QtCore.qFatal("")


if __name__ == "__main__":

    sys.excepthook = excepthook
    visualExperiment = VisualDiscriminationExampleInterface()

    app = QApplication([])
    app.aboutToQuit.connect(visualExperiment.shutdown)

    visualExperiment.start()
    visualExperiment.show()

    sys.exit(app.exec_())