Our final class was one of the most memorable experiences I have had in an Amherst class. The Stencyl games I tried out were engaging and impressive, given the timeframe in which we had to complete them. Most fascinating were the various interfaces groups created and incorporated into their games. Enormous cardboard surfaces placed spatially around the player, a sunflower-shaped controller that snugly wrapped around the player's hand, and a Hoberman Mini Sphere Rainbow modified to convert expansions and contractions into game controls were a few of the great interfaces I played with. Despite the ingenuity of our interfaces, one limitation stuck out to me as unavoidable given that we had to use a Makey Makey: every interface required some form of tactile input, which might prevent people with certain disabilities from playing the games we created.
This observation made me think about the promise that touchless gaming interfaces may hold as universal interfaces: interfaces that can be used by anyone, regardless of disability. My first candidate was an interface that lets users control games by voice. An example is Amazon's Alexa, which we briefly talked about in class. However, this interface might not be usable by people born deaf or mute. Another promising candidate is eye tracking, a feature found in Microsoft's Kinect camera. But this interface, too, would be inaccessible to some populations, for example people born blind. The problem with interfaces, whether touch-based or touchless, is that they rely on a feedback loop between sensory input and output. You say something and know it is what you intend to say because you can hear what you just said, or you move your eyes in a direction you are aware of because you see your visual field shift in a way that matches your expectation. This inherent sensory feedback loop prevents any single game interface from becoming universal, since each one automatically excludes people who cannot complete its particular loop.
If all interfaces require this feedback between sensory input and output, can there really be a universal, one-size-fits-all interface? I can imagine a brain interface that measures brain activity, an output all conscious humans can give, and uses that activity to control games. The type of brain activity that maps to each control would be calibrated prior to gameplay. The input side of the interface would need the ability to translate in-game events into any combination of sight, touch, and sound, so that the feedback players receive matches their sensory abilities. Such a device is probably infeasible with current technology, but ideas about possible universal interfaces might at least start a conversation about ways in which video game interface designers can make interfaces that are accessible to all.