Handling Risks in A2A Java Agents for A2A and MCP Servers

A2ajava uses the tools4ai library to convert Java methods into A2A tasks or MCP tools. Each method annotated with @Action becomes a task or tool that can be called by an A2A client or an MCP client.

Risk Categorization in Tools4AI

In Tools4AI, actions are classified into three levels of risk:

public enum ActionRisk {
    LOW, MEDIUM, HIGH
}

By default, every action is considered LOW risk unless explicitly specified otherwise. This provides a baseline for assessing AI agent behavior and implementing necessary safeguards.
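For example, an action declared without a riskLevel attribute falls back to LOW. The sketch below is illustrative only: the WeatherService class and getWeather method are made up for this example, while @Agent and @Action come from the annotations described on this page.

@Agent
public class WeatherService {

    // riskLevel is omitted, so this action defaults to ActionRisk.LOW
    // and can be executed by the agent without additional safeguards
    @Action(description = "Get the current weather for a city")
    public String getWeather(String city) {
        return "Sunny in " + city;
    }
}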

Defining Actions with Risk Levels

Tools4AI uses the @Action annotation to mark methods as callable by AI agents while specifying their risk level:

package com.t4a.annotations;
import com.t4a.api.ActionRisk;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * This annotation marks a method as an action which can be called by AI.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Action {
    String description() default "";
    ActionRisk riskLevel() default ActionRisk.LOW;
}

Examples of Risk Levels in Action

Example 1: Comparing Cars (Low Risk)

@Service
@Log
@Agent
public class CompareCarService {
    public CompareCarService() {
        log.info("Created CompareCarService");
    }

    @Action(description = "Compare two cars", riskLevel = ActionRisk.LOW)
    public String compareCar(String car1, String car2) {
        log.info(car1);
        log.info(car2);
        // Implement the comparison logic here
        return "This is better - " + car2;
    }
}

Example 2: Comparing and Placing an Order (High Risk)

@Service
@Log
@Agent
public class CompareCarService {
    public CompareCarService() {
        log.info("Created CompareCarService");
    }

    @Action(description = "Compare two cars", riskLevel = ActionRisk.LOW)
    public String compareCar(String car1, String car2) {
        log.info(car1);
        log.info(car2);
        // Implement the comparison logic here
        return "This is better - " + car2;
    }
    
    @Action(description = "Place an order for the car", riskLevel = ActionRisk.HIGH)
    public String compareAndPlaceOrder(String car1, String car2) {
        log.info(car1);
        log.info(car2);
        // Implement the order placement logic here
        return "This is better, so sending a buy order - " + car2;
    }
}

Benefits of Risk-Based Action Management

  1. Improved AI Safety: Prevents unintended execution of critical actions
  2. Transparency and Accountability: Enables developers to audit and control AI decisions
  3. Granular Control: Different actions can have different risk levels
  4. Regulatory Compliance: Helps meet legal and ethical standards

Complex Example: Vehicle Management Service

@Log
@Agent
public class VehicleManagementService {
    public VehicleManagementService() {
        log.info("Created VehicleManagementService");
    }

    @Action(description = "Check vehicle status", riskLevel = ActionRisk.LOW)
    public String checkVehicleStatus(String vehicleId) {
        log.info("Checking status for vehicle: " + vehicleId);
        return "Vehicle " + vehicleId + " is in good condition.";
    }
    
    @Action(description = "Schedule a service appointment", riskLevel = ActionRisk.MEDIUM)
    public String scheduleService(String vehicleId, String date) {
        log.info("Scheduling service for vehicle: " + vehicleId + " on " + date);
        return "Service for vehicle " + vehicleId + " scheduled on " + date;
    }
    
    @Action(description = "Initiate payment for service", riskLevel = ActionRisk.HIGH)
    public String initiatePayment(String vehicleId, double amount) {
        log.info("Initiating payment of " + amount + " for vehicle: " + vehicleId);
        return "Payment of " + amount + " initiated for vehicle " + vehicleId;
    }
}

Processing Actions with Risk Management

High-risk actions require explicit human approval before execution:

public Object processSingleAction(String prompt, AIAction action,
    HumanInLoop humanVerification, ExplainDecision explain) throws AIProcessingException {
    if (action == null) {
        // No action was supplied, so predict one from the prompt via the configured AI platform
        action = PredictionLoader.getInstance().getPredictedAction(prompt, AIPlatform.OPENAI);

        if (action == null) {
            return "No action found for the prompt: " + prompt;
        }

        // Predicted HIGH-risk actions are not executed automatically; the call
        // returns early and asks for explicit human approval instead
        if (action.getActionRisk() == ActionRisk.HIGH) {
            log.warn("This is a high-risk action and requires explicit approval.");
            return "This is a high-risk action and will not proceed without human intervention: "
                   + action.getActionName();
        }
    }
    // ... rest of the processing logic
}
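A rough usage sketch of the method above: the processor variable, the prompt text, and the humanVerification and explain arguments are illustrative assumptions rather than values prescribed by the library.

// "processor" is assumed to be whichever AIProcessor implementation exposes
// the processSingleAction method shown above; the prompts are examples only.
Object comparison = processor.processSingleAction(
        "Compare a Tesla Model 3 with a BMW i4", null, humanVerification, explain);
// Resolves to compareCar (LOW risk), so the comparison result is returned directly.

Object order = processor.processSingleAction(
        "Compare the two cars and place an order for the better one", null,
        humanVerification, explain);
// Resolves to compareAndPlaceOrder (HIGH risk), so the method returns the
// "will not proceed without human intervention" message instead of placing the order.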