Wednesday, October 26, 2011

The next steps in robotics and computer vision: behavior analysis and situational awareness

Devin Coldewey is a Seattle-based writer and photographer. He has written for the TechCrunch network since 2007. Some posts he would like you to read: The Perils of Externalization of Knowledge | Generation I | Surveillant Society | Select Two | Frame War | Custom Manifest | Our Great Sin. His personal site is coldewey.cc.


We've already seen some exciting developments recently in the areas of robotics and computer vision. They're not as academic as you might expect: huge tech successes like the Roomba and the Kinect have relied as much on clever algorithms and software as they have on marketing and retail presence. So what's next for our increasingly intelligent cameras, webcams, TVs, and phones?

I talked with Dr. Anthony Hoogs, head of computer vision research at Kitware, a company that is a frequent partner of DARPA, the NIH, and other acronyms you'd likely recognize. We discussed what progress in this area could reasonably be expected over the next few years.

Kitware is part of what we might reasonably call third-party tech, the kind not often in the spotlight. Hoogs' research department relies on government contracts and DARPA grants, while we generally cover companies and products funded by venture backing or corporate R&D budgets, which tend to be higher profile.

We've written before about the need to make sense of all the data being produced on the battlefield, with a camera on every platoon, on every vehicle, and looking down from every aircraft. And then there's the enormous amount of footage produced by domestic monitoring: public and private security cameras, traffic cams, and so on. The volume of media produced by all these devices and networks is far too large to be monitored effectively by humans. That's where Kitware comes in.

The next step in computer vision, says Dr. Hoogs, is one they're working on: behavior analysis. Just as something like the Kinect must distinguish between a reach for the chip bag and any number of other gestures, surveillance footage must be judged interesting or not. "Interesting" is an incredibly complex concept, though, and it's not nearly as simple as setting thresholds on movement and shape.
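To see why, consider the naive baseline: flag any frame where enough pixels change. Here's a minimal sketch of that approach in Python with OpenCV; the file name, threshold, and blob-size cutoff are my own illustrative choices, not Kitware's.

```python
# Naive motion "interestingness": frame differencing plus fixed thresholds.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)  # pixel-wise change vs. last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # "Interesting" here just means a big enough moving blob.
    if any(cv2.contourArea(c) > 500 for c in contours):
        print("motion event flagged")
    prev_gray = gray
```

A swaying tree or a passing cloud trips this just as readily as someone climbing a fence, which is exactly why real behavior analysis has to reason about what the motion means, not merely that it happened.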

What works for Kitware's military and surveillance purposes, however, would be equally at home in our own devices. Reducing thousands of hours of security footage to a few minutes of relevant clips is only one way to apply the algorithms and software they create. Making that analysis happen in real time is the breakthrough that has to occur before it can move into the living room. I asked whether better and more widely available image sensors have made this easier, but he feels the main catalyst is actually better processors. I should have known: more sensors mean more data, but not necessarily more useful data. Meanwhile, algorithms that are already effective on lower-fidelity imagery can run faster and more often.
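The fidelity-for-speed trade is easy to picture: shrink each frame before analyzing it and the per-frame cost drops roughly with the pixel count. A hedged sketch follows; the feed name, the scale factor, and the stand-in analysis routine are all my own assumptions.

```python
# Trade image fidelity for real-time speed by downscaling before analysis.
import cv2

def analyze(frame):
    """Stand-in for an expensive vision routine (hypothetical)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 100, 200).mean()

cap = cv2.VideoCapture("feed.mp4")  # hypothetical recorded feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Quarter resolution in each dimension: ~1/16th the pixels to process,
    # often still enough signal for motion and behavior cues.
    small = cv2.resize(frame, None, fx=0.25, fy=0.25)
    score = analyze(small)
```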

It's already in things like point-and-shoot cameras, which try to apply it to useful functions, mainly face detection, and eventually it will simply be built into more and more cameras. The potential, though, is huge. The end result is that every camera effectively becomes a robot: aware of, and able to track and classify, every object in its surroundings, with a waving hand and smiling face drawing particular attention, as would a prowler or an improperly parked car.
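Face detection of the point-and-shoot variety has been commodity tech for a while; OpenCV ships the classic Haar-cascade detector, which is roughly the class of algorithm those cameras run. A quick sketch, with a hypothetical file name and no claim to any camera maker's actual code:

```python
# Classic Haar-cascade face detection, as bundled with OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("snapshot.jpg")  # hypothetical photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    # A camera would use these boxes to drive autofocus or exposure.
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("snapshot_faces.jpg", img)
```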

This is where the issue of privacy becomes a problem. So far, Kitware has largely relied on public databases for its research: images and videos whose use is governed by established rules. But as I wrote in Surveillant Society, law and social mores consistently lag behind technology, and this is no exception. Should home security cameras check faces against an off-site database of "trusted" people? Should doorway cameras record regular comers and goers but flag unfamiliar people and vehicles? And will people act differently when they know their TV is watching them? These are complex questions, and fortunately ones Dr. Hoogs mostly gets to avoid: their work enables the technology without dictating how it's applied.

Kitware also releases a significant portion of its work publicly, and it is widely used; other companies, like PrimeSense, likewise hope to become the de facto standard for new interfaces such as depth control and object recognition.



I asked what we could reasonably expect in real products over the next year or two. Dr. Hoogs believes that visualization and augmented reality will be the next wave of consumer applications to use this. Your phone already knows where it is, which way it's facing, what businesses are nearby, and so on. Early entries like Google Goggles and Layar show the potential, but the processing and infrastructure need to improve before it hits the big time.
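At the core of those early AR apps is simple geometry: compare the compass heading with the bearing to a point of interest and decide whether it falls inside the camera's field of view. A rough sketch, with invented coordinates and an assumed 60-degree field of view:

```python
# Decide whether a nearby business should be overlaid on the camera view.
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

phone = (47.6062, -122.3321)   # hypothetical position (Seattle)
heading = 90.0                 # compass says we face due east
cafe = (47.6065, -122.3250)    # a business roughly to the east

b = bearing_to(*phone, *cafe)
offset = (b - heading + 180) % 360 - 180   # signed angle from view center
if abs(offset) < 30:                       # within a 60-degree field of view
    print("draw the label %.1f degrees from center" % offset)
```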

The big push will come when companies close the gap between research findings and product. That means calling an end to development and feature-tweaking, something pure researchers may have trouble with ("it's finished!"). But as Microsoft showed with the Kinect, and many other companies have with intelligent image-manipulation tech, the opportunities for products are there, just waiting to get off the ground.

