http://oneframeoffame.com/), which asks fans to replace one frame of the band’s music video for the song “More or Less” with a capture from their webcams. In the project, a visitor to the band’s website is shown a single frame of the video and asked to perform an imitation in front of the camera. The new contribution is spliced into the video, which updates once an hour.
“This turned out to be the perfect data source for developing an algorithm that learns to compute similarity based on pose,” explained Taylor, who obtained his doctorate in computer science from the University of Toronto. “Armed with the band’s data and a few machine learning tricks up our sleeves, we built a system that is highly effective at matching people in similar pose but under widely different settings.”
Up until recently, users needed a mouse and a keyboard, a touch-screen or a joystick to control a computer system. Researchers in Germany have now developed a new kind of gesture command system that makes it possible to use just the fingers of a hand.
Before a new vehicle rolls off the assembly lines, it first takes shape as a virtual model. In a cave — a room for the virtual representation of objects — the developers look at it from all sides. They “sit” in it, they examine and improve it. For example, are all the switches easy to reach? The developers have so far used a joystick to interact with the computer which displays the virtual car model.
In the future, they will be able to do so without such an aid — their hand alone will be enough to provide the computer with the respective signals. A multi-touch interface, which was developed by Georg Hackenberg during his Master’s thesis work at the Fraunhofer Institute for Applied Information Technology FIT, makes this possible. His work earned him first place in the Hugo Geiger Prizes. “We are using a camera that, instead of providing color information, provides, pixel by pixel, the distance of each point from the camera. Basically this is achieved by means of a type of gray-scale image in which the shade of gray represents the distance of the object. The camera thus provides three-dimensional information that the system evaluates with the help of special algorithms,” explains Georg Hackenberg.
Hackenberg’s main work consisted of developing the corresponding algorithms. They ensure that the system first recognizes a hand and then follows its movements. The result: the 3D camera system processes gestures down to the movements of individual fingers, in real time. Until now, comparable finger-tracking processes could only detect how hands moved in the image plane — they could not resolve depth information, in other words, how far the hand is from the camera. For this reason it was often difficult to determine which object the hand was interacting with. Is it activating the windshield wipers or turning on the radio? Small movements of the hand, such as gripping, have so far been nearly impossible to detect in real time — or possible only with great amounts of computing power. That is no problem for the new system.
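The core idea Hackenberg describes, isolating the nearest surface in a depth image before tracking it, can be sketched in a few lines. This is a toy illustration, not the Fraunhofer algorithm; the depth band and the synthetic depth values are assumptions:

```python
import numpy as np

def segment_nearest_object(depth, band=0.15):
    """Return a boolean mask of pixels within `band` meters of the
    closest reading in the depth image (zeros mean 'no reading')."""
    valid = depth > 0
    nearest = depth[valid].min()
    return valid & (depth <= nearest + band)

# Toy 4x4 depth image in meters: a "hand" at ~0.5 m, background at ~2 m,
# one dropped reading (0.0).
depth = np.array([
    [2.0, 2.0, 0.5, 0.5],
    [2.0, 0.0, 0.5, 0.5],
    [2.0, 2.0, 2.0, 2.0],
    [2.0, 2.0, 2.0, 2.0],
])
mask = segment_nearest_object(depth)
print(int(mask.sum()))  # number of "hand" pixels -> 4
```

A real system would then track the segmented blob frame to frame and fit a finger model to it; this sketch only shows why per-pixel distance makes the first separation step cheap.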
Gesture commands are also interesting for computer games. A gesture recognition prototype already exists. The researchers now want to address weaknesses in the algorithm and carry out initial application studies. Hackenberg hopes that, from a technical viewpoint, the system could be ready for series production within a year. In the medium term, the researchers hope to develop it further so that it can be used in mobile applications as well, which means that it will also find its way into laptops and cell phones.
This may have been a domestic dream a half-century ago, when the fields of robotics and artificial intelligence first captured public imagination. However, it quickly became clear that even “simple” human actions are extremely difficult to replicate in robots. Now, MIT computer scientists are tackling the problem with a hierarchical, progressive algorithm that has the potential to greatly reduce the computational cost associated with performing complex actions.
Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering, and Tomás Lozano-Pérez, the School of Engineering Professor of Teaching Excellence and co-director of MIT’s Center for Robotics, outline their approach in a paper titled “Hierarchical Task and Motion Planning in the Now,” which they presented at the IEEE Conference on Robotics and Automation earlier this month in Shanghai.
Traditionally, programs that get robots to function autonomously have been split into two types: task planning and geometric motion planning. A task planner can decide that it needs to traverse the living room, but be unable to figure out a path around furniture and other obstacles. A geometric planner can figure out how to get to the phone, but not actually decide that a phone call needs to be made.
Of course, any robot that’s going to be useful around the house must have a way to integrate these two types of planning. Kaelbling and Lozano-Pérez believe that the key is to break the computationally burdensome larger goal into smaller steps, then make a detailed plan for only the first few, leaving the exact mechanisms of subsequent steps for later. “We’re introducing a hierarchy and being aggressive about breaking things up into manageable chunks,” Lozano-Pérez says. Though the idea of a hierarchy is not new, the researchers are applying an incremental breakdown to create a timeline for their “in the now” approach, in which robots follow the age-old wisdom of “one step at a time.”
The result is robots that are able to respond to environments that change over time due to external factors as well as their own actions. These robots “do the execution interleaved with the planning,” Kaelbling says.
The trick is figuring out exactly which decisions need to be made in advance, and which can — and should — be put off until later.
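The interleaving Kaelbling describes can be caricatured in code. The sketch below is a hypothetical illustration of planning “in the now,” not the researchers’ system; all function and step names are invented:

```python
def plan_in_the_now(next_step, execute, done, state):
    """Toy sketch of interleaved planning and execution: choose and
    fully plan only the immediate step, act, then look at the world
    again. Later steps stay abstract until they become current."""
    trace = []
    while not done(state):
        step = next_step(state)   # detailed planning for one step only
        execute(step, state)      # acting may change the world
        trace.append(step)
    return trace

# Hypothetical errand: cross the room, then pick up the phone.
state = {"crossed": False, "holding_phone": False}
trace = plan_in_the_now(
    next_step=lambda s: "cross_room" if not s["crossed"] else "grab_phone",
    execute=lambda step, s: (
        s.update(crossed=True) if step == "cross_room"
        else s.update(holding_phone=True)
    ),
    done=lambda s: s["holding_phone"],
    state=state,
)
print(trace)  # ['cross_room', 'grab_phone']
```

Because `next_step` consults the current state each time around the loop, a surprise (say, an obstacle appearing after the first action) changes what gets planned next, rather than invalidating a long precomputed plan.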
Sometimes, procrastination is a good thing
Kaelbling compares this approach to the intuitive strategies humans use for complex activities. She cites flying from Boston to San Francisco as an example: You need an in-depth plan for arriving at Logan Airport on time, and perhaps you have some idea of how you will check in and board the plane. But you don’t bother to plan your path through the terminal once you arrive in San Francisco, because you probably don’t have advance knowledge of what the terminal looks like — and even if you did, the locations of obstacles such as people or baggage are bound to change in the meantime. Therefore, it would be better — necessary, even — to wait for more information.
Why shouldn’t robots use the same strategy? Until now, most robotics researchers have focused on constructing complete plans, with every step from start to finish detailed in advance before execution begins. This is a way to maximize optimality — accomplishing the goal in the fewest movements — and to ensure that a plan is actually achievable before initiating it.
But the researchers say that while this approach may work well in theory and in simulations, once it comes time to run the program in a robot, the computational burden and real-world variability make it impractical to consider the details of every step from the get-go. “You have to introduce an approximation to get some tractability. You have to say, ‘Whichever way this works out, I’m going to be able to deal with it,'” Lozano-Pérez says.
Their approach extends not just to task planning, but also to geometric planning: Think of the computational cost associated with building a precise map of every object in a cluttered kitchen. In Kaelbling and Lozano-Pérez’s “in the now” approach, the robot could construct a rough map of the area where it will start — say, the countertop as a place for assembling ingredients. Later on in the plan — if it becomes clear that the robot will need a detailed map of the fridge’s middle shelf, to be able to reach for a jar of pickles, for example — it will refine its model as necessary, using valuable computation power to model only those areas crucial to the task at hand.
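That deferred-mapping idea can also be sketched. The snippet below is a toy illustration under assumed names (`LazyMap`, the region labels); it only shows the bookkeeping of paying the modeling cost on demand:

```python
class LazyMap:
    """Toy sketch of on-demand geometric mapping: every region starts
    as a coarse placeholder and is replaced by a detailed model only
    when a task needs it. `build_detail` stands in for expensive
    sensing and model construction."""
    def __init__(self, regions, build_detail):
        self.models = {r: "coarse" for r in regions}
        self.build_detail = build_detail
        self.refinements = 0

    def detail(self, region):
        if self.models[region] == "coarse":
            self.models[region] = self.build_detail(region)  # pay cost late
            self.refinements += 1
        return self.models[region]

kitchen = LazyMap(
    regions=["countertop", "fridge_shelf", "cupboard"],
    build_detail=lambda r: f"detailed model of {r}",
)
kitchen.detail("fridge_shelf")     # only this region gets the expensive scan
print(kitchen.refinements)         # 1
print(kitchen.models["cupboard"])  # still 'coarse'
```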
Finding the ‘sweet spot’
Kaelbling and Lozano-Pérez’s method differs from the traditional start-to-finish approach in that it has the potential to introduce suboptimalities in behavior. For example, a robot may pick up object ‘A’ to move it to a location ‘L,’ only to arrive at L and realize another object, ‘B,’ is already there. The robot will then have to drop A and move B before re-grasping A and placing it in L. Perhaps, if the robot had been able to “think ahead” far enough to check L for obstacles before picking up A, a few extra movements could have been avoided.
But, ultimately, the robot still gets the job done. And the researchers believe sacrificing some degree of behavior optimality is worth it to be able to break an extremely complex problem into doable steps. “In computer science, the trade-offs are everything,” Kaelbling says. “What we try to find is some kind of ‘sweet spot’ … where we’re trading efficiency of the actions in the world for computational efficiency.”
Citing the field’s traditional emphasis on optimal behavior, Lozano-Pérez adds, “We’re very consciously saying, ‘No, if you insist on optimality then it’s never going to be practical for real machines.'”
Stephen LaValle, a professor of computer science at the University of Illinois at Urbana-Champaign who was not affiliated with the work, says the approach is an attractive one. “Often in robotics, we have a tendency to be very analytical and engineering-oriented — to want to specify every detail in advance and make sure everything is going to work out and be accounted for,” he says. “[The researchers] take a more optimistic approach that we can figure out certain details later on in the pipeline,” and in doing so, reap a “benefit of efficiency of computational load.”
Looking to the future, the researchers plan to build in learning algorithms so robots will be better able to judge which steps are OK to put off, and which ones should be dealt with earlier in the process. To demonstrate this, Kaelbling returns to the travel example: “If you’re going to rent a car in San Francisco, maybe that’s something you do need to plan in advance,” she says, because putting it off might present a problem down the road — for instance, if you arrive to find the agencies have run out of rental cars.
Although “household helper” robots are an obvious — and useful — application for this kind of algorithm, the researchers say their approach could work in a number of situations, including supply depots, military operations and surveillance activities.
“So it’s not strictly about getting a robot to do stuff in your kitchen,” Kaelbling says. “Although that’s the example we like to think about — because everybody would be able to appreciate that.”
Microsoft Chief Executive Officer (CEO) Steve Ballmer shares his vision about emerging technology and innovation, including the benefits of cloud computing, at the C11 Cloud Summit in New Delhi. (Raveendran/AFP/Getty Images)
NEW YORK—“The cloud is about economies of scale. There are a few large players that have this ability to scale,” said Ben Fried, CIO of Google. He then added with emphasis, “Even the financial services firms on Wall Street can’t do the cloud.”
In a keynote interview at the first annual Bloomberg Enterprise Technology Summit, held in New York City on May 17, Mr. Fried went on to explain how cloud computing will empower companies, supply chains, and entire industries to “build by necessity.”
Whether information will be stored and accessed on a public or community cloud (off premises), a private cloud (on premises), or a hybrid cloud — a combination of the two — will depend on the needs of the client firms or groups of end-users. Thus, flexibility, with the power and capacity to scale, is another attribute that will determine how cloud computing transforms businesses, small and large, as we know them today.
“Dreamworks uses rendering farms at HP,” said Geoff Tudor, chief cloud strategist at Hewlett-Packard, in an ensuing panel discussion on “Operating in the Cloud.” He was referring to the Hollywood studio storing and accessing graphics and animated plates offsite in HP’s cloud.
“Mobility is key. The past few years we have seen a proliferation of devices and apps,” Mr. Tudor stated. “And yet, there will be a single control point for either an interior or exterior cloud.”
What is the Cloud?
The National Institute of Standards and Technology defines cloud computing as “a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
The term was derived from the cloud symbol used to represent the Internet in flow charts and diagrams.
Another way to picture the cloud as a data center is the water cycle, in which water stands for knowledge or data. Water evaporates off the surface and rises to form clouds — data being stored in a giant system — and falls again as precipitation when data is accessed or downloaded from the cloud.
Unlike the clouds in the sky, which come in millions of variations, there are only seven different types of computing clouds.
For now, companies with big cloud servers that have the capacity to scale are Amazon, Microsoft, Google, HP, and IBM. Other firms, such as Verizon, are building their cloud infrastructure through acquisitions.
Strengths and Holes in the Cloud
“The move to cloud-based services will drop costs,” stated David Thompson, CIO of Symantec Corp., in his panel discussion with other industry peers. “The environment will be a complex, hybrid mix of technologies. Companies will plan these IT migrations and technology services from in-house to the cloud. Transformation has happened before. It will happen again. But I see five areas of concern,” he said.
The first is “data spillage,” when information meant to stay in-house spills out to a public cloud. The next, he pointed out, is “a full breach by hacker groups.”
Imagine an app that uses social media to deliver emergency messages even when cell phone networks have stopped working during a natural disaster.
Or an app that can alert rescue workers when someone is alive under a collapsed building.
Those are close to becoming a reality thanks to a unique, weekend-long global event that brought together disaster professionals and volunteer software makers in the hopes of building a set of mobile and online emergency aid tools.
The teams at Random Hacks of Kindness Toronto (RHoK Toronto) were among some 1,000 people in 18 cities across six continents participating in the hacking marathon, or “hackathon,” that unites technologists and humanitarian experts in an effort to solve pressing problems. The Toronto event was held at Ontario Institute for Studies in Education (OISE) and teams worked on six projects. The goal was to complete the prototypes for the aid tools by Sunday afternoon.
Random Hacks of Kindness was founded in 2009 by Google, Microsoft, NASA, Yahoo and the World Bank.
“It’s unbelievable that the teams are able to create these mobile apps and online tools in less than 48 hours,” said Heather Leson, lead organizer of RHoK Toronto. “By dinner time Saturday, one team here had already programmed a working prototype!
“The best part of Random Hacks of Kindness is that no matter which teams win Toronto’s pitch competition, all the participants learn, mentor and share in their world. Plus, some projects will continue and maybe become fully built,” she said.
Before going to bed, Bettnie LaRue checks her Facebook page, dating websites, email and news sites, blogs and maybe watches some TV.
Her iPhone serves as her alarm, so it’s right by her bed. She can send or answer texts from any of her 1,000 Facebook friends — or look up a movie time. She might play app game “Words With Friends” or read for hours on her Nook e-reader.
The only thing she doesn’t have an application for? Getting enough sleep — a common problem, say sleep specialists, for those who spend hours interacting with electronics.
Kaspersky Lab, a leading developer of secure content and threat management solutions, has been awarded a patent in Russia for an innovative system that provides anti-phishing protection. Patent №103643 covers a system that determines whether the domain name of a site corresponds with its IP address, thereby blocking cybercriminals’ attempts to redirect users to fake websites.
A typical phishing attack involves the cybercriminals distributing fake emails, purportedly originating from major online banking or social networking organizations. These emails usually request that users provide their confidential data and contain links to fake websites that mimic genuine ones. Users falling victim to such schemes generally find that the cybercriminals have used their social networking accounts to distribute spam and taken money from their online accounts. The cybercriminals may even try to extort money from users in return for control of their hijacked accounts.
To help prevent phishing attacks, it is common to use blacklists of fake websites or to compare the URLs of web pages to which users are redirected with known, authentic web page URLs. Both technologies have their shortcomings, however. For instance, comparing the name of a website with a blacklist is not effective against newly created fake addresses, and whitelisting of authentic web page URLs will not pick up a spoofed IP address for a requested resource.
Kaspersky Lab’s new technology uses advanced techniques to quickly detect phishing websites, redirection to which is automatic and hidden in the case of a pharming attack. During such an attack, a user enters the URL of an authentic website into their browser, but is surreptitiously redirected to a different IP address where a fake page is located. The technology created by Kaspersky Lab’s Aleksey Malyshev and Timur Biyachuev works by creating a duplicate, safe communication channel. IP addresses and domain names can thus be checked via this channel to ensure that they correspond to each other. As a result, the method provides users with real-time access protection, blocks phishing websites and helps to detect pharming attacks. The new technology also enables the databases of fake web page addresses used in anti-phishing protection modules to be updated promptly.
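The cross-checking idea, comparing the IP addresses a possibly-poisoned local resolver returned against answers obtained over a separate trusted channel, can be sketched as follows. This is a toy illustration of the general principle, not Kaspersky Lab’s patented implementation; the function name and the stubbed known-good records are invented:

```python
def looks_like_pharming(domain, observed_ips, trusted_lookup):
    """Toy sketch of the cross-check: `observed_ips` is what the local
    (possibly poisoned) resolver returned; `trusted_lookup` queries a
    separate, safe channel. Disjoint answers suggest redirection."""
    trusted_ips = set(trusted_lookup(domain))
    return not (set(observed_ips) & trusted_ips)

# Stubbed 'safe channel' with known-good records (hypothetical data).
known_good = {"bank.example": {"203.0.113.10", "203.0.113.11"}}
trusted = lambda d: known_good.get(d, set())

print(looks_like_pharming("bank.example", ["203.0.113.10"], trusted))  # False
print(looks_like_pharming("bank.example", ["198.51.100.7"], trusted))  # True
```

A production system would also have to secure the trusted channel itself and handle legitimate DNS changes, which is where the engineering difficulty actually lies.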
Currently patent offices in the USA, Russia, China and Europe are examining around one hundred innovative IT security technology patent applications from Kaspersky Lab. via Kaspersky.com
Video cameras and wireless technology have gotten so small that developers at ZionEyez in Seattle are now working on Eyez, a pair of glasses with a tiny embedded video camera that can continuously record everything you see in 720p and transmit it wirelessly to social media sites for all to see.
ZionEyez is calling Eyez “a new revolution in social media technology,” allowing some exhibitionistic gadget lover to wear these video-shooting glasses that transmit their images via Bluetooth to an Eyez app on an iPhone or Android device. From there, the video would be streamed through wireless networks to video sites online, where it could all be viewed live.
Streaming live HD video is possible, but it’s still an awfully tall order for wireless data networks circa 2011, so there’s also 8GB of flash memory on board as well as a mini-USB port, allowing users to record the goings-on and then transfer them to a computer for editing and later broadcast.
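A quick back-of-envelope check shows why the on-board flash matters. Assuming a hypothetical 4 Mbit/s encode rate for 720p (ZionEyez has not stated the actual bitrate), 8 GB holds roughly four and a half hours of footage:

```python
# Rough capacity estimate under assumed figures.
flash_bits = 8 * 10**9 * 8   # 8 GB of flash, in bits
bitrate = 4 * 10**6          # 4 Mbit/s for 720p video: an assumption
seconds = flash_bits / bitrate
print(round(seconds / 3600, 1))  # hours of footage -> 4.4
```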
This is fascinating. In a situation where transmission technology is dependable enough for smooth streaming, this could turn into an interesting performance-art project. Or imagine a celebrity wearing these glasses throughout the day. Maybe someday, everyone will record video of everything they do every day, and stream it live. That’s the ultimate social media.
Another stolen laptop has been recovered with a combination of laptop recovery software — which covertly took photos of the thief — a blog on Tumblr, and Twitter.
Joshua Kaufman, a software programmer, got his laptop back after setting up the terrifically named This Guy Has My MacBook tumblelog. Kaufman had reported the MacBook stolen to the police on March 21, but two months later still hadn’t heard anything about the case.
Screenshot of suspected thief logging into the victim’s Google account, recovered by This Guy Has My MacBook.

It was a good thing Kaufman had previously installed Hidden, laptop tracking software for Mac. With the software he was able to remotely gather evidence about the thief, including location information and photos — some with screenshots of the thief logging into his Google account.
It was only after Kaufman started tweeting about the blog and the case on Twitter, however, that things started moving along. The Twitter community re-tweeted his message thousands of times, apparently, and Good Morning America picked up the story. Finally the Oakland Police Department contacted Kaufman about following up on his case, and the laptop thief was arrested that evening. Kaufman got his MacBook back the next morning.