BrainDen.com - Brain Teasers


EDM

Question

9 answers to this question



That was really cool. I'm sure that the major advances we will see in computer technology in the next few years will be dominated by improvements in interfacing and integrating systems into real life. This has been an underdeveloped area for years and I'm excited to see it move on. I wonder if any Braindenners have any good ideas about how this will move forward?

Here's what I think:

Motion sensing is a big area of future development IMO. Consider the amazing phenomenon of the Wii. Not amazingly good, mind you, but just amazing that so many people would buy one considering how crap it is. I was expecting a device that mirrors your movement on-screen, translating it into gameplay in a seamless and intuitive way. It turned out to be a clumsy and confusing interface, but still really popular. Imagine how many people would buy one if it worked the way it ought to! I don't think it will be long before we have motion sensors that pick up gestures and motion accurately without the need for any held devices. Consider how well that would work with 3D TV... mmm. Although I'd like to see it work without a held device, fingertip covers providing pressure feedback would enhance the experience when grabbing, typing, etc.

The only thing about the SixthSense unit that doesn't float my boat is the projection device. While it offers a way to extend information onto objects around you, it seems a bit clumsy. Plus, the need isn't sufficient. As long as we have mobile phones, we will always have a handheld device with a graphical display, and I also think it likely that there will be a trend toward ePaper devices. A large screen which folds or rolls up small may give a good display size without compromising privacy, and IMO has more versatility than projection. Coupled with good motion sensing technology it would give you an equally cool interface, and where an information overlay on everyday objects is really required, transparent ePaper could do it, or your device could simulate transparency by capturing an image and displaying it life-size, with an information overlay. Though if we go the way of projection, it would at least be amusing to see people at airports and railway stations crowding around the walls jostling for a bit of projection space.




I think I've seen this video before (I can't watch it right now, but I did glance at it to check). I have to admit that I thought the projection unit was clunky. A favorite sci-fi conceit is the lightsaber or the floating holographic menu, where the light "projects" into a fixed space and stops at a fixed distance (defying all known laws of physics, so far as I'm aware), but if the projection unit could just project images onto empty space, that would make it more useful. :thumbsup:

Regarding the Wiimote, I think that Nintendo has been able to do some neat things with it, though most of the innovation with it is done in-house. One exception to that of course is Johnny Chung Lee who has done some pretty cool tricks using the Wiimote in conjunction with other hardware.

Of course, the other console manufacturers are getting on board and coming out with their own things. The PlayStation 3 controllers already have motion sensing with more in the works, and Microsoft is working on Project Natal, though all of those efforts are aimed more at gaming than at general daily use.


I think I've seen this video before (I can't watch it right now, but I did glance at it to check). I have to admit that I thought the projection unit was clunky. A favorite sci-fi conceit is the lightsaber or the floating holographic menu, where the light "projects" into a fixed space and stops at a fixed distance (defying all known laws of physics, so far as I'm aware), but if the projection unit could just project images onto empty space, that would make it more useful. :thumbsup:
Dang those pesky laws of physics

Regarding the Wiimote, I think that Nintendo has been able to do some neat things with it, though most of the innovation with it is done in-house. One exception to that of course is Johnny Chung Lee who has done some pretty cool tricks using the Wiimote in conjunction with other hardware.
Wow, I love that head tracking device! That's so much closer to what the Wii ought to do. Actually, I didn't know the technology of the Wiimote was so impressive; you'd never guess from using one. This is what baffles me about the Wii: if they could do stuff like that, why are Wii games so counterintuitive? Admittedly I haven't played many, just the Wii Sports stuff, but it was a great disappointment to see that the device makes little or no attempt to reflect your actual movements, and actually makes for a more confusing and arbitrary control system than what we had previously.

Fingers crossed that Project Natal works well. I really think that if it does, it will rapidly be expanded to applications beyond gaming. The real wonder of Mistry's kit was the intuitive gesture recognition, used in an integrated way between devices. Suppose you had an ePaper device with a miniature Project Natal-style sensor on it, coupled with a home bristling with sensors and interactive gadgets which knew, amongst other things, their physical position (plus dumb objects with barcode-like markers which the sensors could pick up). You could then use gestures to interact with information throughout your environment. Pick up a can of beans and make the "more" gesture to order more beans. Tell the window to open with a sweep of your hand. Look up information about any object by gesturing from it to your handheld device. Lights would turn on and off around you as you wander around the house. A reading light comes on if you look at a book or a piece of paper and the ambient light is insufficient for reading, that kind of thing. And a well-tuned gesture recognition device is of course a far better way to interact with your PC/TV/whatever than the clunky interface devices we use now.
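Just to make that idea concrete, here's a minimal Python sketch of the kind of object-plus-gesture dispatch such a home might do. Every name in it (the SmartHome class, the object IDs, the gesture strings) is made up purely for illustration, not taken from any real product:

```python
# Toy sketch of a gesture router for a sensor-equipped home.
# All class, object, and gesture names are hypothetical.

class SmartHome:
    def __init__(self):
        # Maps (object_id, gesture) pairs to actions.
        self.bindings = {}
        self.log = []

    def bind(self, object_id, gesture, action):
        self.bindings[(object_id, gesture)] = action

    def on_gesture(self, object_id, gesture):
        """Called by the sensor layer when a gesture is seen while
        the user is looking at (or holding) a tracked object."""
        action = self.bindings.get((object_id, gesture))
        if action is None:
            return None  # unrecognized combination: do nothing
        result = action()
        self.log.append((object_id, gesture, result))
        return result

home = SmartHome()
home.bind("beans_can", "more", lambda: "order placed: beans")
home.bind("window_1", "sweep_up", lambda: "window_1 opening")

print(home.on_gesture("beans_can", "more"))     # order placed: beans
print(home.on_gesture("window_1", "sweep_up"))  # window_1 opening
```

The point of the dictionary keyed on (object, gesture) is that the same gesture can mean different things on different objects, which is what makes the "more" gesture on a bean can feel natural.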



Wow, I love that head tracking device! That's so much closer to what the Wii ought to do. Actually, I didn't know the technology of the Wiimote was so impressive; you'd never guess from using one. This is what baffles me about the Wii: if they could do stuff like that, why are Wii games so counterintuitive? Admittedly I haven't played many, just the Wii Sports stuff, but it was a great disappointment to see that the device makes little or no attempt to reflect your actual movements, and actually makes for a more confusing and arbitrary control system than what we had previously.

In some cases, they did try to mimic human motions, but there are usually more arbitrary methods that work better. I think it reacts to a regular golf swing, but you can get better distance with a flick of the wrist. One exception I've heard of is "Metroid Prime 3", which is supposed to have some pretty neat uses of the Wiimote (I haven't actually tried it myself since I don't own a Wii, but I've heard good things from other people). As an example, you have a grappling hook that you can use to grab onto objects and yank them. So if you jerk the controller up when you grab an enemy's shield, you can rip it out of his hands, making him vulnerable to attack. I'm not sure why Nintendo hasn't done more of that, but I suspect it has to do with the fact that the sports games garner larger sales... :dry: So far as I'm aware, the third-party games that have tried to make creative use of the Wiimote have failed utterly, so most of them require the Wiimote to be used in a more traditional fashion.

I do think that Nintendo could retake the motion-activated controller market if they took Johnny Chung Lee's ideas and turned them into games, but they're losing the window since other systems are getting in on the action. With the head-tracking and the 3D viewing, you could have the player dodging bullets in bullet-time by actually moving left and right (or something like that).

Sony is already jumping into the Wii's territory with Heavy Rain, a PlayStation 3 exclusive that does make use of the PS3 controller's motion-control. If the player is trying to open a window that's jammed, they may have to jerk the controller straight up, mimicking the action you might take when opening a sticky window. There are also situations where you need to shake the controller along a particular axis to mimic other actions. It's not completely driven by controller motion though, so there are plenty of places that simply require an arbitrary button press.

Fingers crossed that Project Natal works well. I really think that if it does, it will rapidly be expanded to applications beyond gaming. The real wonder of Mistry's kit was the intuitive gesture recognition, used in an integrated way between devices. Suppose you had an ePaper device with a miniature Project Natal-style sensor on it, coupled with a home bristling with sensors and interactive gadgets which knew, amongst other things, their physical position (plus dumb objects with barcode-like markers which the sensors could pick up). You could then use gestures to interact with information throughout your environment. Pick up a can of beans and make the "more" gesture to order more beans. Tell the window to open with a sweep of your hand. Look up information about any object by gesturing from it to your handheld device. Lights would turn on and off around you as you wander around the house. A reading light comes on if you look at a book or a piece of paper and the ambient light is insufficient for reading, that kind of thing. And a well-tuned gesture recognition device is of course a far better way to interact with your PC/TV/whatever than the clunky interface devices we use now.

Of course, the downside of such "intuitive" devices is that if they're too sensitive, you may trigger events you didn't actually intend, especially if you tend to gesture when you speak to people. Your movements may wind up being misinterpreted by the sensor. You have to find the right balance between a lack of responsiveness that frustrates people when they don't execute a maneuver exactly, and oversensitivity where the controls trigger at the slightest hint of movement in the indicated direction. The ability to interpret human gestures with that level of precision is extremely complex and hard to achieve. But you can't market anything less to your average consumer. :rolleyes:
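That balance could be tuned with something as simple as a confidence threshold plus a dwell time: only fire when the recognizer's score stays high for several consecutive frames. A toy sketch, with made-up numbers, just to show the trade-off (low threshold or short dwell fires on stray twitches; high threshold or long dwell feels unresponsive):

```python
# Toy gesture trigger: require `dwell_frames` consecutive frames with
# confidence >= threshold before accepting a gesture. Numbers are
# illustrative only, not from any real recognizer.

def detect(confidences, threshold=0.8, dwell_frames=3):
    """Return the frame indices at which a gesture is accepted."""
    fired = []
    run = 0
    for i, c in enumerate(confidences):
        run = run + 1 if c >= threshold else 0
        if run == dwell_frames:
            fired.append(i)
            run = 0  # reset so one sustained gesture fires only once
    return fired

# A brief twitch (one high frame) is ignored; a sustained gesture
# (three high frames in a row) triggers exactly once.
frames = [0.2, 0.9, 0.3, 0.85, 0.9, 0.95, 0.1]
print(detect(frames))  # [5]
```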

Since every person moves and behaves slightly differently, I think every such motion-detection device would have to be individually calibrated to its user, which would probably make them prohibitively expensive once they reach production-level competence. The most obvious failure point for the devices at that point would be the human operator (but of course, the customer is always right, so if the user's definition of "dark" toast and the machine's differ, the machine has to be the defective one... :P ).
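The calibration idea could look something like this in miniature: record a user's own samples of a gesture during setup, then judge later input against that personal baseline rather than a factory-fixed template. A toy one-dimensional feature, purely illustrative:

```python
# Per-user calibration sketch. The "feature" here is a single number
# (imagine wrist-flick speed); real systems would use many features.

from statistics import mean, stdev

class UserCalibration:
    def __init__(self, samples):
        # Samples recorded during a one-off setup session.
        self.mu = mean(samples)
        self.sigma = stdev(samples)

    def matches(self, value, tolerance=2.0):
        """Accept input within `tolerance` standard deviations of
        this user's own calibrated gesture."""
        return abs(value - self.mu) <= tolerance * self.sigma

# A user with a gentle flick: the same threshold that accepts their
# input would reject a much more vigorous stranger's.
gentle_user = UserCalibration([1.0, 1.2, 0.9, 1.1])
print(gentle_user.matches(1.15))  # True
print(gentle_user.matches(3.0))   # False
```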

On a related note, the comic "Dilbert" briefly had its own TV series, though I think it got canceled pretty quickly. It had some funny bits and one related to this discussion had to do with a voice-activated shower. Dilbert just had to state the temperature and the shower would adjust the temperature accordingly*. Naturally, Dogbert tried to mess with the settings, but Dilbert thought ahead and limited the response to his voice patterns. Unfortunately, Dogbert found a way around that limitation (taken from a quote on IMDb):

Dilbert: [Dilbert is in the shower, with a voice activated temperature control] The shower's calibrated to respond to my voice only.

Dogbert: Boy, you think of everything.

Dilbert: I'm cautious.

Dogbert: That's why you had training wheels until you were seventeen.

Dilbert: I was fourteen.

[shower temperature goes to 14 degrees]

Dilbert: AAAAAAGH!

[Almost frozen in a block of ice]

Dilbert: 99! 99! 99!

[The temperature goes back to 99]

Then, of course, there's the problem that if we put barcodes on everything, we'll trigger the endtimes since everything will be required to have the mark of the beast! :o

* All temperatures in Fahrenheit.



Cool, I would implement that just for fun. Americans travelling in the rest of the world would get to find out what a shower at 99°C feels like. Bwahahahahaa!!!
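The arithmetic behind the joke, for anyone who wants to check it, is just the standard Fahrenheit/Celsius conversion:

```python
# 99 °F is a pleasant shower; the same number read as Celsius is
# just below boiling.

def f_to_c(f):
    return (f - 32) * 5 / 9

def c_to_f(c):
    return c * 9 / 5 + 32

print(round(f_to_c(99), 1))  # 37.2  -- Dilbert's intended shower, in Celsius
print(round(c_to_f(99), 1))  # 210.2 -- a 99 °C shower, in Fahrenheit
```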



Ouch. :(

But that's the underlying problem with getting computer systems to interact with humans. They respond to human commands without fail, provided the human gives the command correctly. A hard (and still unsolved) problem is understanding human intent. Dilbert doesn't intend to change the temperature by saying "fourteen," but the computer dutifully responds to all numbers it parses from the input stream. A possible fix would be to add a start command like "Temperature: 99" or "99 degrees," but that adds complexity for the user and requires that they remember the correct sequence to activate the mechanism as they want.
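The shower bug and the proposed start-command fix can be sketched in a few lines. This is hypothetical parsing (real speech systems would handle spoken number words, not digits), just to show the difference in behaviour:

```python
# The naive shower reacts to every number it hears; the guarded
# version only acts on an explicit "Temperature:" prefix.

import re

def naive_shower(utterance, current_temp):
    """React to any number in the input stream (the Dilbert bug)."""
    numbers = re.findall(r"\d+", utterance)
    return int(numbers[-1]) if numbers else current_temp

def guarded_shower(utterance, current_temp):
    """Only change temperature on an explicit start command."""
    m = re.search(r"temperature:?\s*(\d+)", utterance, re.IGNORECASE)
    return int(m.group(1)) if m else current_temp

print(naive_shower("I was 14.", 99))             # 14 -- unintended change
print(guarded_shower("I was 14.", 99))           # 99 -- chatter ignored
print(guarded_shower("Temperature: 99", 14))     # 99 -- explicit command
```

The guarded version trades convenience for safety, which is exactly the complexity cost mentioned above: the user now has to remember the magic word.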

The voice-activated shower is a relatively trivial example and can probably be easily fixed with something like what I suggested, but to do anything more complex with computer/human interaction, then things get complicated. The SixthSense thing is slightly different as it is mainly used for informational purposes and people can't really harm themselves by using it incorrectly. Though if it becomes a mainstream product, I'm sure that someone will find some way to injure/kill themselves using it. :rolleyes:

The other problem that makes this hard is deciding how to give commands to the device. In our digitally interactive society, we've decided that taking your fingers and sliding them apart means you want to zoom in on an object. Basically, you want your fingers to be treated as fixed points on the object, and the image should scale accordingly. It seems intuitive to us, as we've seen people doing it for years now with some of the interactive devices we've had already. But there are a plethora of other ways we could have implemented a hand-zoom feature, and someone completely unfamiliar with the method may be at a loss when they try to zoom in. Someone has to decide what interface the user has available to operate the device, and what seems intuitive and obvious to them may be completely obtuse to the person standing next to them.
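Incidentally, the "fingers as fixed points" behaviour falls out of one simple ratio: the zoom factor is the current distance between the two touch points divided by their distance when the gesture began, so the content under each finger stays under that finger. A minimal sketch:

```python
# Pinch-to-zoom scale factor from two pairs of touch points.

import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def zoom_factor(start_a, start_b, cur_a, cur_b):
    """Scale to apply so the content tracks the two fingers."""
    return distance(cur_a, cur_b) / distance(start_a, start_b)

# Fingers start 100 px apart and spread to 200 px apart: 2x zoom.
print(zoom_factor((0, 0), (100, 0), (-50, 0), (150, 0)))  # 2.0
```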

Take opening an interactive window as an example. It's stuffy in the house and I want to open the window, but it's cold outside, so I just want to open it a crack. I know what I want to do, but how do I get the computer that's controlling the window to understand that I don't want the window open all the way? Creating a simple "Open" command would probably be very easy. You look at the window and wave your hand in an upward motion; the sensors reading the direction of your gaze identify the window, and they see your hand execute the "Open" motion. But how can you provide a degree of "Open" to the machine? I might decide that I want to crack the window using a flick of the wrist instead of a wave, but would that be intuitive to everyone? :unsure:
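One possible (entirely made-up) answer is to let the gesture carry its own magnitude: scale the opening by the size of the sweep, so a small flick cracks the window and a full sweep opens it wide. The mapping itself is trivial; as argued above, agreeing on the gesture vocabulary is the hard part:

```python
# Map the vertical extent of a hand sweep (in cm) to a window-opening
# fraction. The 40 cm "full sweep" is an arbitrary illustrative choice.

def window_opening(sweep_cm, full_sweep_cm=40.0):
    """Return an opening fraction: 0.0 = closed, 1.0 = fully open."""
    fraction = sweep_cm / full_sweep_cm
    return max(0.0, min(1.0, fraction))  # clamp to the valid range

print(window_opening(5.0))   # 0.125 -- a flick: open a crack
print(window_opening(40.0))  # 1.0   -- a full sweep: wide open
```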

To have these interactive houses become a reality, it seems like people would have to provide their own commands for specialized tasks like the one above. This probably wouldn't be a problem for the technically-minded, who would recognize the complexity of the task required to get a computer to understand your gestures seamlessly, but would the average user want to have to spend the time training the machine how to open a window or other mundane, yet complicated tasks that humans undertake every day without thought? Most people want an out-of-the-box (a very big box in this case :lol: ) ready product that just works the moment you plug it in. They don't want to have to teach it or anything. At least, that's how I see things right now. I guess the techies will get the computer-houses first and the mainstream will come on later...



You make a good point. We've been promised computer houses since the 1950s and still don't have them. Maybe that says that people don't really want them, and largely for reasons you just spelled out. Anyone can figure out how to open the window when it's just a matter of walking to the window and physically opening it. I do think that in time, technology within the home will become more integrated, but it will be a slow process and each element will have to be justified and perfected before anyone will want it.

A gesture-controlled house may be too much to work well, although I do think that gesture-controlled computer systems like those in Minority Report will be a reality in short order (minus the holographic display, of course). We may or may not do away with the keyboard, but I reckon the mouse's days are numbered. A gesture recognition system that accurately picks up hand movements gives you, in effect, 10 [partially] independent pointers. I expect most two-handed users would use a maximum of 4 in effect, with the index, ring and little fingers used to change behaviour the way a right-click does. For example, spreading out both hands palms down might cause a virtual keyboard to pop up for text input. This would work well without a 3D display, but would be a lot better with one.

We may have to learn a gesture language for computer interaction, but it would be worth learning, and I don't think it's a hard thing to learn. You say the sliding-fingers-apart gesture is arbitrary, but I think it would be picked up by most people who had seen anyone else do it even once. Actually, I'd say it's a marvellous piece of intuitive interaction that blends seamlessly with rotation, motion, throwing away, and so on. Rather than being obtuse, it's the sort of thing that makes you think "wow, that's how things ought to work" the first time you see it. When we see these systems visualized in films, it is generally clear what the user is doing even though we haven't read the user manual. That's because gesture is a much closer approximation to handling physical objects, which we understand intuitively, and is certainly more easily grasped than many existing GUI conventions.

Perhaps this gesture language will filter out into wider application as computer devices continue to merge with other hardware, so the language you use to interact with your PC may end up being the thing you use to control the central heating. But probably not the toaster. I guess some things just don't need to be that intelligent.

