swap_ai_script_lua - ryzom/ryzomcore GitHub Wiki


---
title: Swap out AI script for Lua
description:
published: true
date: 2023-03-15T00:02:58.966Z
tags:
editor: markdown
dateCreated: 2023-03-14T23:55:26.888Z
---

Some notes on swapping out AI script for Lua

## Lua sandbox

Ryzom Ring generates AI scripts on the client side, so the server must be able to run potentially dangerous scripts safely, quarantined on the R2 shards. R2 scripts are normally very short, so excessive runtime is unusual.

Person:

Can we sandbox Lua so it's safe to use as an alternative AI script in Ryzom Core? Scripts may be written for NPCs by players and uploaded to the server, so they're not coming from a trusted source.

ChatGPT:

Yes, you can sandbox Lua to make it safer for use in Ryzom Core as an alternative AI scripting language. Sandboxing involves creating a restricted environment where only specific functions and libraries are accessible, which helps protect the host application from potentially malicious code.

Here are some steps to create a basic Lua sandbox:

  1. Create a new Lua state with luaL_newstate() (lua_open() is a deprecated alias from older Lua versions).
  2. Open only the standard libraries you need. Instead of luaL_openlibs(), which opens all of them, call the individual luaopen_* functions for just the libraries you want to include.
  3. Remove or replace any potentially dangerous functions from the global environment, such as os.execute or io.open. To do this, push nil with lua_pushnil() and assign it with lua_setglobal().
  4. Create a new environment table containing only the allowed functions and libraries, and set it as the environment for the untrusted code: lua_setfenv() in Lua 5.1, or set the chunk's _ENV upvalue with lua_setupvalue() in Lua 5.2 and later.
  5. Use luaL_loadbuffer() or luaL_loadfile() to load the untrusted Lua code, then lua_pcall() to run it inside the sandboxed environment.
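The same environment-restriction idea can be sketched in pure Lua, which is easier to experiment with than the C API. This assumes Lua 5.2+, where load() accepts an environment table; the script source here is only an illustration:

```lua
-- Whitelist environment: only explicitly listed functions are visible
-- to the untrusted script.
local safe_env = {
  print  = print,
  pairs  = pairs,
  ipairs = ipairs,
  math   = math,
  string = string,
  table  = table,
  -- deliberately no os, io, debug, load, dofile, require, ...
}

local untrusted = [[
  result = math.max(1, 2, 3)   -- allowed: math is whitelisted
  -- os.execute("...")         -- would fail: os is nil here
]]

-- Fourth argument sets the chunk's environment (Lua 5.2+)
local chunk, err = load(untrusted, "user_script", "t", safe_env)
if not chunk then error(err) end
local ok = pcall(chunk)
print(ok, safe_env.result)  -- true    3
```

A whitelist like this is generally safer than removing known-bad functions one by one, since anything you forget to list is simply absent.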

Keep in mind that these steps provide a basic level of sandboxing. Depending on your use case, you might need to implement additional security measures or restrictions. For instance, you could enforce limits on the amount of memory and CPU time consumed by the scripts.

By implementing a sandboxed environment, you can provide a safer way for users to upload and run their AI scripts for NPCs, minimizing potential risks to the Ryzom Core server.

Person:

So possibly give each NPC manager their own Lua sandbox...

ChatGPT:

Yes, that's a viable approach. Giving each NPC manager its own Lua sandbox can further enhance security and isolation between scripts. This can prevent potential issues, such as one NPC manager accidentally or maliciously affecting another NPC manager's behavior or data.

Here's a high-level outline of how to implement this:

  1. When initializing each NPC manager, create a new Lua state for it and set up a sandboxed environment as described in the previous answer. Make sure to include only the necessary libraries and functions that the NPC manager will require.
  2. Load and run the AI scripts for each NPC manager within their respective sandboxed Lua states. This ensures that the AI scripts are isolated from each other and have no access to other NPC managers' data or functions.
  3. As you execute the AI scripts, use the appropriate Lua API functions to communicate between the Ryzom Core engine and the sandboxed Lua state. This allows the engine to control the NPCs and receive updates from the AI scripts.

This approach not only enhances security but also improves maintainability by keeping each NPC manager's AI scripts separate and self-contained. Additionally, it can help with debugging and error handling, as issues in one NPC manager's script will not affect others.
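A minimal sketch of that isolation, using one environment table per manager (the manager names are hypothetical; in the real service each manager would get its own Lua state, but the visibility rules are the same):

```lua
-- One isolated environment per NPC manager; scripts loaded into one
-- environment cannot see the globals of another.
local function make_sandbox()
  return { math = math, pairs = pairs }  -- shared safe libraries only
end

local managers = {
  bandit_camp = make_sandbox(),
  town_guards = make_sandbox(),
}

-- Each manager runs its own script in its own environment (Lua 5.2+).
load("secret = 42",   "bandit", "t", managers.bandit_camp)()
load("leak = secret", "guards", "t", managers.town_guards)()

print(managers.bandit_camp.secret)  -- 42
print(managers.town_guards.leak)    -- nil (secret is not visible here)
```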

Person:

How do we protect against users putting infinite loops inside scripts? The AI service is single threaded

ChatGPT:

Protecting against infinite loops in user-submitted scripts can be challenging, especially in a single-threaded environment. One approach is to introduce a hard limit on the number of instructions a script is allowed to execute. In Lua, you can use debug hooks to achieve this.

Here's a high-level outline of the steps:

  1. Define a function that will be called when the instruction limit is reached. This function should raise an error to terminate the script:

```lua
function instruction_limit_exceeded()
  error("Script execution exceeded instruction limit")
end
```

  2. Set a debug hook with debug.sethook() so that instruction_limit_exceeded() is called after a given number of VM instructions have executed. The hook must be set before running the user-submitted script:

```lua
local instruction_limit = 100000 -- set an appropriate limit for your requirements
debug.sethook(instruction_limit_exceeded, "", instruction_limit)
```

  3. Load and execute the user-submitted script in the sandboxed environment. If the script exceeds the instruction limit, instruction_limit_exceeded() is called and the script is terminated.
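Putting the steps together, a minimal runnable sketch (the 100000-instruction budget is arbitrary; note that a count hook keeps firing every N instructions, so it is cleared after pcall() to avoid aborting host code too):

```lua
local instruction_limit = 100000

local function instruction_limit_exceeded()
  error("Script execution exceeded instruction limit")
end

local user_script = [[ while true do end ]]  -- deliberate infinite loop
local chunk = assert(load(user_script, "user_script", "t", {}))

-- Count hook: fires every `instruction_limit` VM instructions.
debug.sethook(instruction_limit_exceeded, "", instruction_limit)
local ok, err = pcall(chunk)  -- the hook's error lands here
debug.sethook()               -- always clear the hook afterwards

print(ok, err)  -- false, plus the instruction-limit error message
```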

Note that setting an instruction limit may impact the performance of the AI service, as the Lua interpreter must check the instruction count frequently. You should balance the instruction limit and performance based on your requirements and the complexity of the AI scripts you expect to handle.

Keep in mind that this approach does not prevent other forms of resource exhaustion, such as memory usage or file I/O. You should ensure that your sandboxing solution also addresses these potential issues.

Person:

Can we prevent the user from removing this limiter hook?

ChatGPT:

Yes, you can prevent the user from removing the limiter hook by restricting access to the debug library in the sandboxed environment. When you create a sandboxed environment for user-submitted scripts, you can control which Lua libraries and functions are accessible. By not exposing the debug library (or just the debug.sethook() function) to the user, they will not be able to remove or modify the limiter hook you set.

Here's an example of how to create a sandboxed environment without the debug library:

```lua
-- Create a safe environment for user scripts
local safe_env = {}

-- Add the global functions and libraries, excluding debug.
-- Note: for real use, prefer a whitelist -- this blacklist still
-- exposes dangerous globals such as os.execute and io.open.
for k, v in pairs(_G) do
  if k ~= "debug" then
    safe_env[k] = v
  end
end

-- Alternatively, if you want to allow some functionality of the `debug`
-- library but not the `sethook` function, you can create a limited
-- version of the `debug` library:
local limited_debug = {}
for k, v in pairs(debug) do
  if k ~= "sethook" then
    limited_debug[k] = v
  end
end
safe_env["debug"] = limited_debug

-- Hook callback that aborts the script when the limit is hit
local function instruction_limit_exceeded()
  error("Script execution exceeded instruction limit")
end

-- Load the user script in the sandboxed environment (Lua 5.2+ load)
local user_script = "..." -- User-submitted script
local user_function, err = load(user_script, nil, "t", safe_env)
if not user_function then
  error("Error loading user script: " .. err)
end

-- Set the instruction limiter hook
local instruction_limit = 100000
debug.sethook(instruction_limit_exceeded, "", instruction_limit)

-- Run the user script protected, then clear the hook so the limit
-- error cannot crash the host and the hook stops firing
local ok, run_err = pcall(user_function)
debug.sethook()
```

By using this approach, the user will not have access to debug.sethook() and so cannot remove or modify the limiter hook. Be careful with the rest of the debug library as well: functions such as debug.getupvalue() or debug.setmetatable() can also be used to break out of a sandbox, so expose only what you genuinely need.
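A quick way to confirm the restriction works is to run a script that tries to clear the hook, from an environment that omits debug entirely, and check that it fails (sketch):

```lua
local safe_env = { print = print }  -- no debug exposed

local attack = [[ debug.sethook() ]]  -- tries to remove the limiter
local chunk = assert(load(attack, "attack", "t", safe_env))
local ok = pcall(chunk)
print(ok)  -- false: `debug` is nil inside the sandbox
```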

## Backwards compatibility

Person:

And then for supporting legacy AI script, would it be easiest to simply write a class in the AI service C++ code that parses the old AI scripts and translates them to the Lua syntax? That way we only have to maintain Lua interfaces for the native AI script functions. Any alternative suggestions for backwards compatibility?

ChatGPT:

Translating the old AI scripts to Lua syntax by writing a parser in the AI service C++ code is one way to maintain backward compatibility while focusing on supporting Lua interfaces for native AI script functions. This approach allows you to consolidate the scripting functionality into Lua and reduce the maintenance overhead of supporting two separate scripting systems.

Here are a couple of alternative suggestions for backward compatibility:

  1. **Dual scripting engine support**: Maintain both the old AI scripting engine and the new Lua-based engine, with a mechanism that detects the type of each script and routes it to the appropriate engine. This requires more maintenance and debugging, but allows a smooth transition between the old and new systems.
  2. **Automatic script conversion tool**: Develop a standalone tool that converts old AI scripts to Lua. Users can run it on their existing AI scripts before uploading them to the server; once converted, only the Lua interfaces need to be maintained. This requires an initial investment in the conversion tool but could reduce maintenance in the long run.

Each of these approaches has its own set of trade-offs in terms of maintenance, performance, and ease of use. You may want to consider which option best aligns with your project goals and resources before making a decision.
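The routing in the dual-engine option can be sketched like this; the file-extension convention and both runner functions are hypothetical stand-ins, not existing Ryzom Core APIs:

```lua
-- Hypothetical stubs for the two execution paths
local function run_lua_sandboxed(src) return "lua" end
local function run_legacy_ai_script(src) return "legacy" end

-- Dispatch on an assumed extension convention
local function run_ai_script(path, source)
  if path:match("%.lua$") then
    return run_lua_sandboxed(source)
  end
  return run_legacy_ai_script(source)
end

print(run_ai_script("guard.lua", ""))       -- lua
print(run_ai_script("guard.aiscript", ""))  -- legacy
```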

Person:

Regardless of the solution, our old AI script native functions are 0 index based, while Lua is normally 1 index based. Does it seem reasonable to transition all the AI script interface functions to 1 index based, for consistency? The script translator utility would have to be aware of this as well.

ChatGPT:

Yes, transitioning all the AI script interface functions to be 1 index-based for consistency with Lua would be a reasonable decision. This change would make it more intuitive for users familiar with Lua, as they would not have to adjust to a different indexing system while working with the AI script interface functions.

If you decide to go with this approach, you would need to update the script translator utility to handle this change. The utility should be able to detect the 0 index-based functions in the old AI scripts and convert them to the corresponding 1 index-based functions in the Lua syntax. This would ensure a smooth transition and maintain compatibility between the old and new systems.

Keep in mind that making this change might have some implications on the existing AI scripts, which would need to be updated accordingly. Make sure to thoroughly test the updated scripts to identify and resolve any issues caused by the change in indexing.
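One way to keep the off-by-one conversion in exactly one place is a thin wrapper layer at the Lua boundary. Here the native binding is a hypothetical stub (the real functions live in the AI service C++ code):

```lua
-- Hypothetical native binding (0-based, like the legacy AI script
-- functions), stubbed in Lua for illustration.
local native = {
  -- returns the group member at 0-based index i
  getGroupMember = function(i) return "member_" .. i end,
}

-- 1-based Lua-facing wrapper: subtract 1 exactly once, at the boundary.
local function getGroupMember(i)
  return native.getGroupMember(i - 1)
end

print(getGroupMember(1))  -- member_0  (the first member)
```

Keeping the conversion at the boundary means neither user scripts nor the native code ever mix conventions, and the translator utility only needs to know that Lua-facing indices are 1-based.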
