In this specific case, the problem could easily be fixed by offering the task "bring object" only when an object had actually been learned beforehand (e.g., the task could be greyed out in the MMUI).
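One simple way to realize such prerequisite-gated tasks is to compute each menu entry's enabled state from the current system state. The following is a minimal sketch; the function and task names are hypothetical and not taken from the actual MMUI implementation.

```python
def build_menu(learned_objects):
    """Return (task name, enabled?) pairs for the MMUI task menu.

    A task is greyed out (enabled == False) when its prerequisite
    is not met; "bring object" requires at least one learned object.
    """
    return [
        ("bring object", len(learned_objects) > 0),
        ("learn object", True),
    ]

# With no learned objects, "bring object" is disabled:
print(build_menu([]))       # [('bring object', False), ('learn object', True)]
print(build_menu(["cup"]))  # [('bring object', True), ('learn object', True)]
```

Deriving the enabled flag from system state at menu-build time, rather than toggling it imperatively, keeps the menu consistent with what the robot can actually do.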
The user interface on the tablet computer (MMUI) incorporated multiple external programs (e.g., Flash games, speech recognition, and the fitness functionality).
The behavior coordination communicates with the MMUI in a way that provides no immediate feedback over the same communication channels.
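This asymmetry can be pictured as two decoupled channels: commands go out on one, while feedback arrives later (if at all) on another, so the MMUI must generate its own interim acknowledgement for the user. The sketch below illustrates the idea with in-process queues; all names are hypothetical, not the system's actual API.

```python
import queue

command_channel = queue.Queue()   # MMUI -> behavior coordination
feedback_channel = queue.Queue()  # behavior coordination -> MMUI (delayed)

def send_command(task):
    """Dispatch a task and return a locally generated acknowledgement,
    since the coordination layer will not reply immediately."""
    command_channel.put(task)
    return f"OK, trying to {task}..."

def poll_feedback():
    """Non-blocking check for delayed feedback from behavior coordination."""
    try:
        return feedback_channel.get_nowait()
    except queue.Empty:
        return None

print(send_command("bring object"))  # interim acknowledgement shown at once
print(poll_feedback())               # None until coordination replies
```

The point is not the queue mechanics but the design consequence: because feedback is asynchronous, the interface itself must close the feedback loop for the user.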
MMUI developers need very specific guidelines--preferably ones based on empirical evidence--to assist them in designing and implementing such interfaces.
Now, with the ubiquity of smartphones, wearables, and other mobile devices with screens (some of them small), designers are faced with creating user interfaces that merge GUIs and VUIs into multimodal user interfaces (MMUIs).
And for developers, the overarching goal should be to create MMUIs that are easy to learn--and easy (and safe) to use.
There is an art and a science to designing an MMUI for a service and its transactions, tasks, and operations.
However, progress toward a mobile wallet will come only through ongoing testing of MMUI prototypes to identify strengths, resolve points of confusion, and guide the user seamlessly back onto the success path.
To support a natural MMUI, designers create user interfaces that accept either speech or keypad input, depending on the task.
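A per-task modality assignment can be as simple as a lookup table with a context-dependent override. This is a sketch under assumed task names and a hypothetical hands-free context flag, not a prescribed design.

```python
# Hypothetical mapping from task to preferred input modality.
PREFERRED_MODALITY = {
    "dial number": "keypad",     # precise digit entry suits the keypad
    "search contact": "speech",  # free-form names suit speech
}

def choose_modality(task, hands_free=False):
    """Pick an input modality for a task; in a hands-free context,
    fall back to speech regardless of the default."""
    if hands_free:
        return "speech"
    return PREFERRED_MODALITY.get(task, "keypad")

print(choose_modality("dial number"))                   # keypad
print(choose_modality("dial number", hands_free=True))  # speech
```

Keeping the task-to-modality mapping in data rather than code makes it easy to revise after usability testing, which fits the prototype-and-test approach advocated above.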
This new style of user interface, the MMUI, will be a combination of GUI and VUI.