Discussion.Input System - Thargoid/pioneer GitHub Wiki
Pad archive as of 26 Oct 2011 00:56 UTC
REQUIREMENTS
- Support button and key remapping
- Support alternative keys for an action (e.g., Joystick Button 1, or Space key)
- Support trigger and axis actions
- Detect ambiguities in the mapping set
- Allow shortcuts that use modifier keys
- Group actions into action sets that can be enabled/disabled as a whole
- Support modal input/input capture (e.g., the key binding options screen needs to be able to capture all input; text entry widgets need to be able to capture text input)
PLANNED DESIGN
Actions:
- An Action is something like zooming, firing lasers, hyperspacing, switching view, rolling and so on. Actions come in two types: TriggerAction and AxisAction (see the code sketch at the end of this section).
- Trigger actions fire once when the given button is pressed. Most GUI shortcuts would be implemented as trigger actions. Firing lasers would not be a trigger action, because the laser is supposed to keep firing while the key/button is held down. Triggers include mouse buttons, I assume; mouse gestures like in Blender, where a flick is detected, might also apply.
- Axis actions have a current value. Axis actions are quite flexible, and probably easiest to understand through examples:
- Zoom controls would be a single axis action. One key maps to +zoom, one key maps to -zoom. The scroll wheel can also be mapped to the same Axis action.
- Manual flight orientation controls would be a set of axis actions (roll, pitch, yaw). Keyboard controls for these would be mapped in positive/negative pairs. These controls would probably also be mapped to the joystick.
- robn: I would imagine that keyboard would add +1/-1, while a joystick or other analog device would add some amount 0<x<1. So you'd map E and +JoyAxis0 to +roll, and both would do the right thing?
- Manual flight linear thrust controls would be a set of axis actions similar to the orientation controls, but might have opposing thrusters mapped separately (? not sure). The main (forward) thruster could be mapped to a joystick throttle lever.
- It would be intuitive, for some people, to have the opposing thrusters work simultaneously, even though releasing the key would be logical
- Don't understand what you mean by "releasing the key would be logical". If thrusters are mapped as one action per axis then you have a binding for +z and a binding for -z (two keys; if neither is pressed you're not thrusting, and if both are pressed you're not thrusting). If thrusters are mapped as separate actions for opposing directions you get one binding for +forward and one binding for +backward (two keys; if neither is pressed you're not thrusting, and if both are pressed you're thrusting both ways and probably accelerating forwards, since the main thruster is more powerful)
- I was thinking of equal side thrusters (where releasing one thruster is the same as firing both); for non-symmetric thrusters like forward/reverse it's better to map them separately. There might end up being fuel consumption, a noticeable lack of animation, and even a gameplay-relevant burning effect from being too close, which might justify not having a switch-off effect when symmetric thrusters fire at once.
- Ok. I think we're in agreement (that it's better to map opposing thrusters to separate actions)
- Lasers would be mapped to an axis action, with one key mapped to +lasers.
- Firing simultaneously from multiple lasers?
- I was assuming the current behaviour would be retained, so there's one key to fire and it fires whichever laser matches your current view.
- For multiple views (eg, multiple turrets) I would think mouse controls would be better, but haven't considered that before.
- Different modes for different purposes might be catered for as part of the API, I guess (controlling drones, specific equipment-related overrides)
- I wouldn't expect the input layer itself to have to care about that -- obviously different combinations of actions/bindings/bindingsets can be set up by the game code as necessary. As long as the input layer doesn't have any fundamental restrictions that we think will cause problems.
- I was thinking along the lines of associating sets of bindings according to a mutually exclusive or priority-based system and switching between them, instead of having the software figure things out
- InputHandler stack gives binding priorities. The mutual exclusion stuff (detecting ambiguities in the bindings) is I think trickier and I'm still unsure about it (see notes at bottom). Suggestions welcome of course :-)
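A minimal sketch of how the two Action types above might look in code. The names TriggerAction and AxisAction come from this discussion; the method names, the callback style and the clamped [-1, 1] axis range are assumptions, not settled API.

```cpp
#include <algorithm>
#include <functional>

class Action {
public:
    virtual ~Action() {}
};

// Fires once when its bound button goes down (GUI shortcuts, etc.).
class TriggerAction : public Action {
public:
    explicit TriggerAction(std::function<void()> onFire) : m_onFire(onFire) {}
    void Fire() { if (m_onFire) m_onFire(); }
private:
    std::function<void()> m_onFire;
};

// Holds a current value in [-1, 1]. A key bound to +roll contributes +1.0
// while held; an analogue joystick axis contributes some fraction, so both
// kinds of binding "do the right thing", as robn suggests above.
class AxisAction : public Action {
public:
    AxisAction() : m_value(0.0f) {}
    void AddContribution(float amount) {
        m_value = std::max(-1.0f, std::min(1.0f, m_value + amount));
    }
    void ResetFrame() { m_value = 0.0f; }  // assuming per-frame accumulation
    float GetValue() const { return m_value; }
private:
    float m_value;
};
```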
Bindings:
- InputButton and InputAxis are auxiliary types that specify one button (key, key+modifiers, mouse button, joystick button) or one axis (scroll wheel, joystick axis, mouse axis). They also provide string conversions for that data.
- A Binding takes input events and maps them onto one Action. The Binding specifies the way the input is mapped (e.g., a BindButtonToAxis binding would have axis values for pressed and un-pressed), and has an InputButton or InputAxis value specifying what button or axis it responds to.
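To make the Binding idea concrete, here is one possible shape for InputButton and a BindButtonToAxis binding, building on the AxisAction sketch above. The SDL key and modifier types are real SDL 1.2 API; the struct layout and the OnKeyEvent signature are illustrative assumptions only.

```cpp
#include <SDL.h>

struct InputButton {
    SDLKey key;   // e.g. SDLK_e
    SDLMod mods;  // e.g. KMOD_NONE, or a mask of modifier keys
    // string conversions (ToString/FromString) would live here too
};

class Binding {
public:
    virtual ~Binding() {}
    // returns true if the event was recognised and mapped onto the action
    virtual bool OnKeyEvent(SDLKey key, SDLMod mods, bool pressed) = 0;
};

// Maps one button onto an AxisAction, with explicit axis values for the
// pressed and un-pressed states (e.g. +1.0 / 0.0 for the +roll key).
class BindButtonToAxis : public Binding {
public:
    BindButtonToAxis(InputButton btn, AxisAction *axis,
                     float pressedValue, float releasedValue)
        : m_button(btn), m_axis(axis),
          m_pressed(pressedValue), m_released(releasedValue) {}

    virtual bool OnKeyEvent(SDLKey key, SDLMod mods, bool pressed) {
        // a real implementation would probably mask out num/caps lock here
        if (key != m_button.key || mods != m_button.mods) return false;
        m_axis->AddContribution(pressed ? m_pressed : m_released);
        return true;
    }
private:
    InputButton m_button;
    AxisAction *m_axis;
    float m_pressed, m_released;
};
```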
Input handling stack:
- Input events are handled by sending them down a stack of InputHandler objects. Each InputHandler in the stack gets an opportunity to capture it.
- A BindingSet is an InputHandler which has references to a set of bindings and captures input for them, passing it on to their input mapping methods. A BindingSet can also detect ambiguities between its bindings (a rough sketch of the handler stack follows after this section).
- robn: clarify please. Is a BindingSet intended to hold a set of binds for a specific function (eg joystick manual flight, so bindings for pitch, yaw, roll, thrust, etc) or a set of binds for a particular mode/screen (eg the whole set of bindings for the sector view). Hmm, perhaps these aren't actually different. Presumably then individual sets could be active/inactive depending on game context?
- There will be a few other custom InputHandler types for specific purposes (text entry, key-binding screen)
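A rough sketch of the handler stack and BindingSet described above. InputHandler, BindingSet and the stack behaviour come from these notes; the event type, the method names and the simplified single-method Binding interface are assumptions for illustration.

```cpp
#include <vector>

struct InputEvent { /* wraps an SDL_Event, or similar */ };

// Simplified Binding interface: one generic entry point for brevity.
class Binding {
public:
    virtual ~Binding() {}
    virtual bool OnEvent(const InputEvent &ev) = 0;
};

class InputHandler {
public:
    virtual ~InputHandler() {}
    // return true to capture the event and stop it propagating further down
    virtual bool HandleEvent(const InputEvent &ev) = 0;
};

// Holds a set of Bindings and offers each incoming event to them.
class BindingSet : public InputHandler {
public:
    void AddBinding(Binding *b) { m_bindings.push_back(b); }
    virtual bool HandleEvent(const InputEvent &ev) {
        for (size_t i = 0; i < m_bindings.size(); ++i)
            if (m_bindings[i]->OnEvent(ev)) return true;
        return false;
    }
    // CheckAmbiguities() would compare the InputButton/InputAxis of each
    // pair of bindings (see the open questions below)
private:
    std::vector<Binding*> m_bindings;
};

class InputStack {
public:
    void Push(InputHandler *h) { m_stack.push_back(h); }
    void Pop() { m_stack.pop_back(); }
    void Dispatch(const InputEvent &ev) {
        // the most recently pushed handler gets first chance to capture
        for (int i = int(m_stack.size()) - 1; i >= 0; --i)
            if (m_stack[i]->HandleEvent(ev)) return;
    }
private:
    std::vector<InputHandler*> m_stack;
};
```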
Examples:
- When mouse flight controls are enabled, mouse events should be captured by pushing a BindingSet containing bindings for the mouse x & y axes (of course the mouse also has to be captured & hidden)
- All GUI keyboard shortcuts should become TriggerActions. Typically each GUI screen would have a BindingSet for its actions, which it pushes when it's shown and pops when it's hidden.
- Text entry widgets will require a custom InputHandler type, which they should push onFocus and pop onBlur (sketched in code after this list).
- Scrollable GUI widgets will have to push a BindingSet onMouseEnter to capture scroll-wheel events (? not sure I like this)
- External view, sector view and system view should share a BindingSet for their view controls (zooming and rotation). A little care will have to be taken to avoid the current problem that scrolling in one of these views zooms in all of them.
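As an illustration of the push/pop pattern in the examples above, here is how a view screen and a text entry widget might use the InputStack sketched earlier. The screen and widget classes here are hypothetical; only the push-on-show / pop-on-hide idea is taken from the notes.

```cpp
// Reuses InputStack, InputHandler, InputEvent and BindingSet from the
// handler-stack sketch above.

class SectorViewScreen {
public:
    explicit SectorViewScreen(InputStack &input) : m_input(input) {
        // bindings for zoom, rotation, etc. would be added to m_bindings here
    }
    void Show() { m_input.Push(&m_bindings); }
    void Hide() { m_input.Pop(); }  // assuming strict LIFO push/pop usage
private:
    InputStack &m_input;
    BindingSet m_bindings;
};

class TextEntryWidget {
public:
    explicit TextEntryWidget(InputStack &input) : m_input(input) {}
    void OnFocus() { m_input.Push(&m_textHandler); }
    void OnBlur()  { m_input.Pop(); }
private:
    // custom InputHandler that captures all key events and turns them into text
    class TextInputHandler : public InputHandler {
        virtual bool HandleEvent(const InputEvent &ev) { /* ... */ return true; }
    };
    InputStack &m_input;
    TextInputHandler m_textHandler;
};
```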
Open questions/problems:
- What should the exact criteria for ambiguities between controls in a single BindingSet be? I'm particularly thinking of modifiers here. We have several trigger actions that use modifier keys (e.g., the time acceleration shortcuts). These shouldn't be considered ambiguous with shortcuts that don't use modifiers. However, what if someone tries to bind a modifier key itself to an action (trigger or axis)?
- robn: SDL handles modifier keys separately, via a bitmask which you test to see if the modifier is down along with the normal key etc. I'd be inclined to make it impossible to explicitly map a modifier key (see the sketch below).
- The above only provides for ambiguity detection between controls in a single BindingSet, but BindingSets might be quite small -- perhaps BindingSets themselves need to be grouped and ambiguity detected within a whole group of BindingSets?
- robn: If a group of BindingSets can be logically grouped, should they just be a single BindingSet? If they're designed to be separately active/inactive based on game context, then I wouldn't try to detect ambiguity between sets. I'd probably make it so the same input event triggers multiple Actions (unless a sane way to tell the player there's a problem can be found).
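Following robn's suggestion, a possible sketch of how modifier handling and ambiguity detection within a single BindingSet could work. The SDL key and modifier constants are real SDL 1.2 API; the helper functions and the exact ambiguity rule are assumptions, not a decided design.

```cpp
#include <SDL.h>

// Reject attempts to bind a modifier key itself to an action.
bool IsBindableKey(SDLKey key)
{
    switch (key) {
    case SDLK_LSHIFT: case SDLK_RSHIFT:
    case SDLK_LCTRL:  case SDLK_RCTRL:
    case SDLK_LALT:   case SDLK_RALT:
    case SDLK_LMETA:  case SDLK_RMETA:
        return false;
    default:
        return true;
    }
}

// Two button bindings are only treated as ambiguous if the key *and* the
// modifier mask match, so Ctrl+T and plain T stay distinct.
bool IsAmbiguous(SDLKey keyA, SDLMod modsA, SDLKey keyB, SDLMod modsB)
{
    return keyA == keyB && modsA == modsB;
}
```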