The Future of the Mobile Interface




Touch interfaces will evolve and become a mainstay of how humans interact with devices. From the iPhone's attractive touch interface on the mobile side, to iPhone clones, to Microsoft touting touch in Windows 7, to PCs now shipping with touch screens, the premise is the same: "to interact". This is the precursor to a technology lifestyle in which the sense of touch is elevated into the spotlight as the next leap in human-device interaction.
From the visual, where we look at a screen, to the aural, when we listen to music, we can watch devices evolve from engaging human senses in a nascent form toward incorporating every sense we have. The next big leap will probably integrate smell, mimicking real-life situations: trekking through a rubbish-strewn dump and smelling the stench of the surroundings as part of the virtual experience of a first-person shooter, or inhaling the sweet scent of greenery after a downpour in a forest as a cinematic accompaniment.
One possible direction for mobile interaction is to leapfrog touch altogether: a lightweight wearable device could interpret hand signs, letting the user trigger shortcuts on the phone or dial a number with a particular gesture. That would be the next possible leap in the mobile interface, unless we develop technology that lets humans issue commands via brain waves, with equipment that detects those waves, converts them to machine instructions, and sends the commands wirelessly to another device.
As mobile devices become ubiquitous, they will specialize and interface design patterns will emerge. Voice is definitely immature right now: my BlackBerry's voice-dial feature has an extremely high error rate. There is no reason this couldn't be improved; we have the technology. For tasks that require your hands to be free (like driving), voice is the best of the known paradigms.
Everyone is in awe of the iPhone touch interface. It has its advantages, such as the lack of moving parts and dynamic layout configuration. However, touch isn't a magic bullet. You can't feel a touch screen, so touch typing is out, and it's hard to use one-handed, while moving through an airport, or in a bumpy car or train.
I think the real solution will be the evolution of both touch and keypads (including chorded ones), combined with voice and one more important thing: integration with other devices. The integration of Palm, and later RIM, devices with desktops and servers was key to their success. Sometimes the best solution is letting users grab information directly (the Nokia E71 scans business cards) or interact with it via another device, like a car dashboard or PC.
Once devices become more context-aware, they will also be smarter, eliminating the need for some kinds of user input; they will simply start reading things from the environment. I should be able to scan barcodes or RFID tags and add the items to my Amazon wish list or my grocery list. My GPS-enabled phone should know I'm driving to an appointment and offer directions to the address on my calendar. A minimal sketch of that last idea follows.
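To make the idea concrete, here is a rough sketch of the kind of logic involved. Every name, threshold, and piece of data below is hypothetical; it is not tied to any real phone or calendar API, just an illustration of joining calendar entries with GPS state so the device can offer directions unprompted.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Hypothetical stand-ins for data the phone already has on hand.
    @dataclass
    class Appointment:
        start: datetime
        address: str

    def suggest_directions(now, speed_kmh, appointments):
        """Offer directions when the user appears to be driving and an
        appointment with an address is coming up within the hour."""
        driving = speed_kmh > 20  # crude guess from GPS speed that we're in a car
        upcoming = [a for a in appointments
                    if timedelta(0) <= a.start - now <= timedelta(hours=1)]
        if driving and upcoming:
            return f"Navigate to {upcoming[0].address}?"
        return None

    # Example: driving at 55 km/h with a meeting 40 minutes away.
    print(suggest_directions(
        datetime(2009, 3, 2, 9, 20), 55.0,
        [Appointment(datetime(2009, 3, 2, 10, 0), "12 Main St")]))

The point is not the code itself but that no user input is needed: the context (speed, time, calendar) already contains the request.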
