Technology News: Paper Gets 'Smart' With Stenciled Sensor Tags


Paper gets 'smart' with drawn-on, stenciled sensor tags

In this example, the speed of the spinning tag on the pinwheel is mapped to onscreen graphics.
Credit: Eric Brockmeyer/Disney Research

A piece of paper is one of the most common, versatile daily items. Children use it to draw their favorite animals and practice writing the A-B-Cs, and adults print reports or scribble a hasty grocery list.

Now, connecting real-world items such as a paper airplane or a classroom survey form to the larger Internet of Things environment is possible using off-the-shelf technology and a pen, sticker or stencil pattern.
Researchers from the University of Washington, Disney Research and Carnegie Mellon University have created ways to give a piece of paper sensing capabilities that allow it to respond to gesture commands and connect to the digital world. The method relies on small radio frequency identification (RFID) tags that are stuck on, printed or drawn onto the paper to create interactive, lightweight interfaces that can do anything from controlling music with a paper baton to live polling in a classroom.
"Paper is our inspiration for this technology," said lead author Hanchuan Li, a UW doctoral student in computer science and engineering. "A piece of paper is still by far one of the most ubiquitous mediums. If RFID tags can make interfaces as simple, flexible and cheap as paper, it makes good sense to deploy those tags anywhere."
The researchers will present their work May 12 at Association for Computing Machinery's CHI 2016 conference in San Jose, California.
The technology -- PaperID -- leverages inexpensive, off-the-shelf RFID tags, which function without batteries but can be detected through a reader device placed in the same room as the tags. Each tag has a unique identification, so a reader's antenna can pick out an individual among many. These tags only cost about 10 cents each and can be stuck onto paper. Alternatively, the simple pattern of a tag's antenna can also be drawn on paper with conductive ink.
When a person's hand waves, touches, swipes or covers a tag, the hand disturbs the signal path between an individual tag and its reader. Algorithms can recognize the specific movements, then classify a signal interruption as a specific command. For example, swiping a hand over a tag placed on a pop-up book might cause the book to play a specific, programmed sound.
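To make the sensing idea concrete, here is a minimal sketch of how a dip in one tag's received signal strength (RSSI) might be labelled as a gesture. The window length, thresholds and the use of RSSI alone are illustrative assumptions; the PaperID pipeline combines several signal features with machine learning classifiers.

```python
# Illustrative sketch only: a toy rule for the kind of signal disturbance
# PaperID detects. Thresholds and features are assumptions, not the
# authors' actual pipeline.
import numpy as np

def classify_gesture(rssi, dip_db=6.0, sample_rate=50):
    """Label a window of RSSI readings (dB) for one tag as
    'cover', 'swipe' or 'none' from the depth and length of the dip."""
    rssi = np.asarray(rssi, dtype=float)
    baseline = np.median(rssi)              # undisturbed signal level
    disturbed = rssi < baseline - dip_db    # samples where a hand blocks the tag
    duration_s = disturbed.sum() / sample_rate
    if duration_s == 0:
        return "none"
    # A quick pass over the tag gives a short dip, a covering hand a long one.
    return "swipe" if duration_s < 0.5 else "cover"

# Example: a two-second window with a brief 11 dB dip in the middle.
window = [-55.0] * 100
window[45:60] = [-66.0] * 15
print(classify_gesture(window))             # -> "swipe"
```

A real classifier would look at many such features (dip shape, phase, read rate) across multiple tags, which is what lets the same hardware distinguish waves, swipes, touches and covers.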
"These little tags, by applying our signal processing and machine learning algorithms, can be turned into a multi-gesture sensor," Li said. "Our research is pushing the boundaries of using commodity hardware to do something it wasn't able to do before."
The researchers developed different interaction methods to adapt RFID tags depending on the type of interaction that the user wants to achieve. For example, a simple sticker tag works well for an on/off button command, while multiple tags drawn side-by-side on paper in an array or circle can serve as sliders and knobs.
"The interesting aspect of PaperID is that it leverages commodity RFID technology thereby expanding the use cases for RFID in general and allowing researchers to prototype these kind of interactive systems without having to build custom hardware," said Shwetak Patel, the Washington Research Foundation Entrepreneurship Endowed Professor in Computer Science & Engineering and Electrical Engineering.
The tags can also track the velocity of moving objects, for example following the motion of a tagged paper conductor's wand and adjusting the pace of the music to match the wand's tempo in mid-air.
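As a rough illustration of how velocity can be recovered with a commodity reader, the sketch below uses the low-level phase reports that UHF RFID readers typically expose: radial motion of the tag advances the round-trip phase by 4π per wavelength. The carrier frequency, sign convention and lack of smoothing are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch: phase-based radial velocity from successive tag reads.
import numpy as np

C = 3e8                    # speed of light, m/s
FREQ = 915e6               # assumed UHF carrier frequency, Hz
WAVELENGTH = C / FREQ      # roughly 0.33 m

def radial_velocity(phases, timestamps):
    """Estimate radial velocity (m/s) of a tag from reader phase reports
    (radians) and their timestamps (seconds)."""
    phases = np.unwrap(np.asarray(phases, dtype=float))   # remove 2*pi jumps
    t = np.asarray(timestamps, dtype=float)
    # Round-trip propagation: phase = 4*pi*distance/wavelength, so the
    # phase rate maps directly to radial speed.
    return WAVELENGTH * np.gradient(phases, t) / (4 * np.pi)

# Example: a tag receding from the antenna at 0.2 m/s.
ts = np.linspace(0, 1, 20)
ph = (4 * np.pi / WAVELENGTH) * (0.5 + 0.2 * ts)
print(radial_velocity(ph, ts).round(2))    # about 0.2 m/s at every sample
```

Real readers report phase modulo 2π and hop between frequency channels, so a practical tracker has to unwrap and filter more carefully than this.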
This technique can be used on other mediums besides paper to enable gesture-based sensing capabilities. The researchers chose to demonstrate on paper in part because it's ubiquitous, flexible and recyclable, fitting the intended goal of creating simple, cost-effective interfaces that can be made quickly on demand for small tasks.
"Ultimately, these techniques can be extended beyond paper to a wide range of materials and usage scenarios," said Alanson Sample, research scientist at Disney Research. "What's exciting is that PaperID provides a new way to link the real and virtual worlds through low cost and ubiquitous gesture interfaces."

 https://s3-us-west-1.amazonaws.com/disneyresearch/wp-content/uploads/20160502234124/PaperID-A-Technique-for-Drawing-Functional-Battery-Free-Wireless-Interfaces-on-Paper-Paper.pdf


Gentle Strength for Robots


Gentle strength for robots   

 

A soft actuator using electrically controllable membranes could pave the way for machines that are no danger to humans

 


Elastic machines: Membranes surrounding sealed, air-filled chambers can be used as actuators, facilitating risk-free contact between humans and robots. Compliant electrodes are attached to each side of the membrane and cause it to stretch when voltage is applied. The membranes are bistable, meaning that they can enclose two different volumes at the same air pressure. A membrane switches from its more compact state to its stretched state when voltage is applied to its electrodes. Even in the case of three or more linked, bubble-shaped chambers, one can be controlled in this way so that it inflates to a larger volume, thereby exerting force.
Credit: © Alejandro Posada
In interacting with humans, robots must first and foremost be safe. If a household robot, for example, encounters a human, it should not continue its movements regardless, but rather give way in case of doubt. Researchers at the Max Planck Institute for Intelligent Systems in Stuttgart are now presenting a motion system -- a so-called elastic actuator -- that is compliant and can be integrated in robots thanks to its space-saving design. The actuator works with hyperelastic membranes that surround air-filled chambers. The volume of the chambers can be controlled by means of an electric field at the membrane. To date, elastic actuators that exert a force by stretching air-filled chambers have always required connection to pumps and compressors to work. A soft actuator such as the one developed by the Stuttgart-based team means that such bulky payloads or tethers may now be superfluous.
Many robots have become indispensable, and it is accepted that they may be dangerous to humans in their workspace. In the automotive industry, for example, they assemble cars with speed and reliability, but are well shielded from direct contact with humans. These robots go through their motions precisely and relentlessly, and anyone who gets in the way could be seriously injured. Robots with soft actuators that cannot harm humans, on the other hand, are tethered by pneumatic hoses and so their radius of motion is restricted. This may be about to change. "We have developed an actuator that makes large changes in form possible without an external supply of compressed air," says Metin Sitti, Director at the Max Planck Institute for Intelligent Systems.
The new device consists of a dielectric elastomer actuator (DEA): a membrane made of hyperelastic material like a latex balloon, with flexible (or 'compliant') electrodes attached to each side. The stretching of the membrane is regulated by means of an electric field between the electrodes, as the electrodes attract each other and squeeze the membrane when voltage is applied. By attaching multiple such membranes, the place of deformation can be shifted controllably in the system.
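For readers who want the governing relation, the squeeze described above is usually quantified with the standard dielectric-elastomer (Maxwell) pressure; the expression below is the textbook form, not a value reported by the Stuttgart team.

```latex
% Effective electrostatic pressure on a dielectric elastomer membrane of
% thickness t and relative permittivity \varepsilon_r under voltage V:
p \;=\; \varepsilon_0\,\varepsilon_r\,E^{2}
  \;=\; \varepsilon_0\,\varepsilon_r\left(\frac{V}{t}\right)^{2}
```

Because the pressure grows with the square of the electric field, a thinner membrane or a higher voltage stretches the elastomer disproportionately more, which is why such actuators typically operate at kilovolt-scale voltages.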

Air is displaced between two chambers

The researchers are helped in this by the fact that their membrane material has two stable states. In other words, it can hold two different volumes at a given air pressure without needing to minimize the larger one. This is a little like letting the air out of an inflated balloon: it does not shrink back to its original size, but remains significantly larger. Thanks to this bistability, the researchers are able to move air between a more inflated chamber and a less inflated one. They do this by applying a voltage to the membrane of the smaller chamber, which responds by stretching and sucking air out of the other bubble. When the power supply is switched off, the membrane contracts, but not to its original volume; it remains larger, corresponding to its stretched state.
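The latching behaviour can be pictured with a deliberately simple toy model: a fixed amount of air shared between two membranes, where briefly energising one membrane moves the air into its chamber and the state persists after the voltage is removed. The numbers and the latching rule are illustrative assumptions, not measurements from the Stuttgart actuator.

```python
# Toy illustration (not the Max Planck group's model) of two bistable,
# air-coupled chambers: a voltage pulse on one membrane shifts the shared
# air into that chamber, and the configuration is retained with power off.
TOTAL_AIR = 10.0                       # arbitrary units, conserved
COMPACT, STRETCHED = 3.0, 7.0          # the two stable volumes of a membrane

class TwoChamberActuator:
    def __init__(self):
        self.inflated = "A"            # which chamber currently holds the air

    def pulse(self, chamber):
        """Briefly apply voltage to one chamber's membrane: it stretches,
        draws in the air, and stays stretched once the voltage is removed."""
        self.inflated = chamber

    def volumes(self):
        vol_a = STRETCHED if self.inflated == "A" else COMPACT
        return {"A": vol_a, "B": TOTAL_AIR - vol_a}

actuator = TwoChamberActuator()
actuator.pulse("B")
print(actuator.volumes())   # {'A': 3.0, 'B': 7.0}, retained without power
```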
"It is important to find suitable hyperelastic polymers that will enable strong and fast deformation and be durable," points out Metin Sitti. With this in mind, the team has tested different membrane materials and also used models to systematically record the behaviour of the elastomer in the actuator.
Thus far, the elastomers tested by Sitti's team have each had a mix of advantages and disadvantages. Some show strong deformation, but at a slow rate. Others work fast, but their deformation is more limited. "We will combine different materials with a view to combining different properties in a single membrane," says Sitti. This is, however, just one of the next steps he and his team have in mind. They also plan to integrate their actuator in a robot so that it can, for instance, move its legs but still give way if it happens to come across a human. Only then can machine-human interactions be risk-free.

Android 7.0


Android N update: release date, news and features


Update: Android N Developer Preview 2 is now available to download, and we have hints of a VR mode for Android phones. Google also confirmed the software will make it easier for phone makers to bring pressure-sensitive display tech to the market.
Here's everything we know about the forthcoming Android N update ahead of Google IO on May 18.
Android N is the next version of Google's phone and tablet operating system - an update so thoroughly refined that the company is now officially more than halfway through the English alphabet.
You can now download the Android N Developer Preview and test new features that didn't make the cut for November's Android 6.0 Marshmallow launch alongside the Nexus 6P and Nexus 5X.
The shocker is that the company didn't wait until Google IO 2016 next week to announce Android N. According to Google, the early release gives developers more time to tinker with the update.
That's fantastic news for anyone who is brave enough to update their phone, tablet or streaming box with the unfinished build. We did just that to tell the rest of you what's inside.

Cut to the chase

  • What is it? The next version of Google's mobile OS, Android N
  • When is it out? Announced this month, but likely won't launch until October*
  • What will it cost? Free
*when - and if - you get it depends on what phone/tablet you own though

Will it be Android 7?

There's no guarantee this will be called the Android 7 update - Google has sometimes opted for smaller point updates instead. Android 4, for example, spanned 4.0 Ice Cream Sandwich, 4.1 Jelly Bean and 4.4 KitKat.
However, Samsung appears to have accidentally leaked a hint that it'll be called Android 7. The source code for its MultiWindow SDK 1.3.1 reads "This version has been released with Android N(7.0) compatibility". Don't be surprised when Google finally announces the big number at Google IO next week.

Android N beta compatibility

Android N Developer Preview 2 is available for newer Nexus devices from the last year and a half - first and foremost Google's star players, the Nexus 5X and Nexus 6P.
The giant Nexus 6 strong-armed its way into the beta, while the weaker Nexus 5 didn't - at least not yet. The Android N preview also works with the Google Pixel C, its recently discontinued tablet sibling the Nexus 9, and the Nexus Player.
In a shocking twist, there's one random outlier in the Android N compatibility matrix: the Sony Xperia Z3. Most non-Nexus phones can't join the beta and have to wait weeks, if not months, after the finished version debuts on new Nexus phones.

Multi-window support

True multitasking support is finally arriving as expected, and it's deservedly the highlight of Android N. You're going to be able to open up two apps at once on your Nexus phone or tablet.
It's a popular feature that Samsung and LG built into their Android skins years ago, so it's nice (and about time) that Google is including the same functionality in its own software.
Working in two apps at once and being able to resize the windows on-the-fly is joined by the ability to view videos in a picture-in-picture mode. YouTube isn't a waste of time if I'm also working, right?
Multi-window support could increase enterprise interest in Android tablets and the Pixel C. It's a bet that Apple recently made when it launched a similar split-screen and picture-in-picture feature for iOS 9.
You may not have to wait until the Android N update to take advantage of pure Android multitasking. It's rumored to be making an early debut in Android 6.1 in June.

New Android N features rumors

We've tested out a bunch of existing Android N features below, but there's also the potential of exciting new tools coming to the update, specifically Android VR.
A buried menu for VR helper services in Android N Developer Preview 2 and an equally buried release note for "Android VR" in the Unreal Engine 4.12 beta hint at a big push for a Google Cardboard successor.
Then there's the exciting notion that Android N could make it easier for phone manufacturers to add 3D Touch-like screen technology on future Android devices, and the not-so-exciting possibility that the app drawer could be going away. Let's quickly move on to what we know for sure, though.

Direct Reply Notifications

You won't have to navigate away from your current window (or, now, windows) just to answer an incoming message. You can just reply within the notification that appears at the top of the screen.
It worked well enough for the iPhone and iPad when the same idea made its debut with iOS 8 under the name Quick Reply. But Apple's approach to messages worked strictly with its iMessage app.
Google is opening up Direct Reply Notifications beyond Hangouts, and that could mean popular apps like WhatsApp could take advantage of this convenient inline messaging feature.

New quick settings menu

Google is adding a new quick settings menu to the notifications shade you pull down from the top. It's a lot like the one Samsung, LG and every other Android manufacturer seems to use.
Sure, Google's stock Android software has had switches for Wi-Fi, Bluetooth, Airplane mode and so forth, but it required pulling the notifications bar down a second time to reveal the quick settings menu.
Now the quick settings toggles appear as soon as you gesture downward once to see notifications. The best news is that all of the buttons are small and unobtrusive, leaving room for notifications to flourish.
Longtime Nexus users will also be happy to hear that the quick settings switches can be sorted to your liking, much like they can on other Android phones. You won't need the System UI Tuner to meddle.
For example, I use MiFi more often than Airplane Mode, so the Mobile Hotspot icon gets promoted to one of the five icons along the top of the initial quick settings menu on my Nexus 6P.
That little airplane icon is still there for my takeoff and landing needs, but it got bumped to the second swipe menu. Sorting is finally up to you, which is really what Android is all about.

Bundled notifications

Google's not done with the way Android N changes notifications. It also announced that notification cards will be grouped together if they're from the same app.
All messages from a specific messaging app, for example, are bundled together in the notification shade. These grouped alerts can then be expanded into individual notifications using a two-finger gesture or tapping the all-new expansion button.
This is basically the opposite of what Apple did in the jump from iOS 8 to iOS 9, switching from grouping them by app to lining them up chronologically. We'll see which method works best this autumn.

Doze Mode 2.0

One of the (literal) sleeper hits of Android Marshmallow has been Doze Mode, Google's crafty way of saving battery life whenever your device is stationary. It amounts to a deep standby mode.
Android N is going to step up the company's energy-saving software techniques by expanding Doze Mode so that it thoroughly limits background tasks whenever the screen is turned off.
That's ideal for throwing a phone in your pocket or your tablet in a backpack, and then retrieving it the next day or next week without having to recharge it right away. Your "I can't even" face when you pick up your dead Nexus phone the next morning will be a thing of the past.

Other features

Google has confirmed that the new "Launcher Shortcuts" feature, which debuted in the second Android N beta, is ready for pressure-sensitive display technology.
It will make it easier for Android manufacturers to bring 3D Touch-like technology to Android handsets as it's baked directly into the OS.
Rumor has it Android N could bring a specific VR mode to your Android phone too. The settings menu includes a "VR helper services" section that allows certain apps to be registered as a "VR Listener" and a "VR Helper".
What exactly that means for virtual reality on your Android phone isn't clear yet, but it's likely to have integration with Google Cardboard.

The Android N name

History has taught us that Android N is going to be named after a delicious treat, but Google hasn't told us which one it is yet. It usually doesn't confirm the full name until later in the year.
For now, we're testing out the Developer Preview on a first-letter basis. It's very informal. We also don't know for sure whether it'll be called Android 7.0 - remember Google's dabble with the number four across Android 4.0 Ice Cream Sandwich, 4.1/4.2/4.3 Jelly Bean and 4.4 KitKat.
It reverted to type with 5.0 Lollipop and 6.0 Marshmallow, but Google always has the option to throw in a curveball once in a while.

Android N release date

The official Android N launch is likely several months away; however, we fully expect to see a new Developer Preview and additional features when the Google IO 2016 keynote happens next week.
Google's annual conference takes place May 18-20, 2016, but there tend to be several months between the IO announcement and when the new version of Android actually rolls out.
That means you probably won't be able to download the final version before October - and even then it's likely that only Nexus-branded phones and tablets will be able to install it that month.
Your brand new Samsung Galaxy S7 and Galaxy S7 Edge will have to wait. Manufacturers and carriers have to rework their own version of the software and push it out to users - and that can take months.

What phones will get Android N?

If you've got a recent flagship phone, you should be in luck. Most phone and tablet makers try and push the software to phones and tablets that are less than two years old, but it may be quite a wait.
Samsung, Sony, LG and HTC are usually quite fast at getting the update to your phone, as is Motorola. Some other manufacturers can take a little while to release it, though.
Each manufacturer takes time to tweak the updates. Take Android Marshmallow, for example: some phones still don't have the update, even though it's been out for five months... five very long months, since February was 29 days long this leap year.
If you want the latest software, it's best to get a Nexus device, as the newest version of Android is always pushed to those first. Newer Nexus owners are currently able to test out Developer Preview 2.
Google has stressed that the features involved in this alpha version of Android N are only the beginning. Expect to see more front-facing features at Google IO in May.

Robotics News: Shape-Shifting Modular Interactive Device


Shape-shifting modular interactive device unveiled

 

Cubimorph is a modular interactive device that holds touchscreens on each of the six module faces and that uses a hinge-mounted turntable mechanism to self-reconfigure in the user's hand. One example is a mobile phone that can transform into a console when a user launches a game.
 

A prototype interactive mobile device called Cubimorph, which can change shape on demand, will be presented this week at ICRA 2016 in Stockholm, Sweden, one of the leading international forums for robotics researchers.


The research, led by Dr Anne Roudaut from the Department of Computer Science at the University of Bristol in collaboration with academics at Purdue University and the Universities of Lancaster and Sussex, will be presented at the International Conference on Robotics and Automation (ICRA).
There has been growing interest in modular interactive devices in the human-computer interaction (HCI) community, but existing devices so far consist largely of folding displays and fall short of high shape resolution.
The modular interactive device, made out of a chain of cubes, contributes towards the vision of programmable matter, in which interactive devices change their shape to fit the functionality end-users require.
At the conference the researchers will present a design rationale that shows user requirements to consider when designing homogeneous modular interactive devices.
The research team will also show the Cubimorph mechanical design, three prototypes demonstrating key aspects -- turntable hinges, embedded touchscreens and miniaturisation -- and an adaptation of the probabilistic roadmap algorithm for reconfiguration.
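For readers unfamiliar with probabilistic roadmaps, the sketch below shows the generic algorithm in a toy two-dimensional configuration space: sample random configurations, connect nearby pairs whose connecting motion is collision-free, then search the resulting graph. Cubimorph's adaptation plans over chain-of-cubes configurations and hinge rotations instead; everything here is a generic illustration, not the paper's planner.

```python
# Generic probabilistic roadmap (PRM) sketch in a toy 2-D configuration space.
import random, math, heapq

def prm_path(start, goal, is_free, n_samples=300, k=8, seed=0):
    """Sample free configurations, connect each to its k nearest neighbours
    when the straight-line motion is free, then run Dijkstra start -> goal."""
    rng = random.Random(seed)
    nodes = [start, goal] + [(rng.random(), rng.random()) for _ in range(n_samples)]
    nodes = [q for q in nodes if is_free(q)]
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])

    def edge_free(a, b, steps=20):      # check interpolated points along the edge
        return all(is_free((a[0] + (b[0] - a[0]) * i / steps,
                            a[1] + (b[1] - a[1]) * i / steps))
                   for i in range(steps + 1))

    graph = {i: [] for i in range(len(nodes))}
    for i, q in enumerate(nodes):
        nearest = sorted(range(len(nodes)), key=lambda j: dist(q, nodes[j]))[1:k + 1]
        for j in nearest:
            if edge_free(q, nodes[j]):
                w = dist(q, nodes[j])
                graph[i].append((j, w))
                graph[j].append((i, w))

    # Dijkstra from node 0 (start) to node 1 (goal).
    best, frontier = {0: 0.0}, [(0.0, 0, [0])]
    while frontier:
        cost, i, path = heapq.heappop(frontier)
        if i == 1:
            return [nodes[p] for p in path]
        for j, w in graph[i]:
            if cost + w < best.get(j, float("inf")):
                best[j] = cost + w
                heapq.heappush(frontier, (cost + w, j, path + [j]))
    return None                          # start and goal not connected

# Toy example: route around a rectangular obstacle in the unit square.
free = lambda q: not (0.4 < q[0] < 0.6 and 0.2 < q[1] < 0.8)
print(prm_path((0.1, 0.5), (0.9, 0.5), free))
```

In Cubimorph's setting, a "configuration" would be an arrangement of the cube chain rather than a point in the plane, and an "edge" a feasible sequence of hinge rotations between two arrangements.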
Dr Anne Roudaut, Lecturer from the University's Department of Computer Science and co-leader of BIG (the Bristol Interaction Group), said: "Cubimorph is the first step towards a real modular interactive device. Much work still needs to be achieved to put such devices in end-users' hands, but we hope our work will create discussion between the human-computer interaction and robotics communities that could be of benefit to one another."
Video: https://www.youtube.com/watch?v=jhutb0k1WDM

Robobird to Make Its First Flight at Airports


Robobird to make its first flight at airports



Nico Nijenhuis and the Robobird.
Credit: Image courtesy of University of Twente
 

 University of Twente's Robobird will make its first flights at an airport location in February. Weeze Airport in Germany, just across the Dutch border near Nijmegen, will serve as the test site for this life-like robotic falcon developed by Clear Flight Solutions, a spin-off company of the University of Twente. The Robobird is designed to scare away birds at airports and waste processing plants.

 'Finally, this is a historic step for the Robobird and our company', says Nico Nijenhuis, Master's student at the University of Twente and the CEO of Clear Flight Solutions. 'We already fly our Robirds and drones at many locations, and doing this at an airport for the first time is really significant. Schiphol Airport has been interested for many years now, but Dutch law makes it difficult to test there. The situation is easier in Germany, which is why we are going to Weeze.'

 Training the robot and human operators

 Clear Flight Solutions is benefiting from the more relaxed rules at Weeze, as well as the relatively limited amount of air traffic there. The airport handles around 2.5 million passengers annually, most of whom come from the Netherlands. Schiphol Airport handles 55 million passengers annually. In addition to testing the Robird, the company will also train the Robird's 'pilot' and 'observer' (who watches other air traffic). 'If you operate at an airport, there are a lot of protocols that you have to follow', says Nijenhuis. 'You're working in a high-risk area and there are all kinds of things that you need to check. We use the latest technologies, but the human aspect also remains crucial.'


No option but to cross the border

Nijenhuis thinks it is a shame that the situation at Schiphol Airport is so difficult, but he also says that a lot of work is currently being done to accommodate the drone sector in the Netherlands. 'Airports are very important to us; however, the law in the Netherlands means that this kind of testing is very sensitive. There are major differences with countries like Germany and France. It is unfortunate to see that so much activity in the drone sector is being drawn away from the Netherlands. Fortunately, our politicians are starting to understand this. Meetings between the Ministry of Infrastructure and Environment and the drone sector are going well, so I'm very happy about that. Finally we are all talking about the rules together. At the moment, it is often the case that professionals are not allowed to do anything, while amateurs can do whatever they want. Luckily, that situation is changing. The government has also launched an awareness and information campaign. That is another positive development.'

The Robobird

The cost of bird control at airports worldwide is estimated in the billions, and it does not consist only of material damage: birds can also cause fatal accidents. Birds likewise cause billions in damage in the agrarian sector, the waste disposal sector, harbours, and the oil and gas industry. A common problem is that, because birds are clever, they quickly get used to existing bird control solutions and simply fly around them. The high-tech Robobird, however, convincingly mimics the flight of a real peregrine falcon. The flying behaviour of the Robobird is so true to life that birds immediately believe their natural enemy is present in the area. Because this approach exploits the birds' instinctive fear of birds of prey, habituation is not an issue.

Bee Model Breakthrough for Robotics


Bee model could be breakthrough for robot development

 

Computer model of how bees avoid hitting walls could help autonomous robots in situations like search and rescue



A visualization of the model taken at one time point while running. Each sphere represents a computational unit, with lines representing the connection between units. The colors represent the output of each unit. The left and right of the image are the inputs to the model and the center is the output, which is used to guide the virtual bee down a simulated corridor.
Credit: The University of Sheffield
 
 

Scientists at the University of Sheffield have created a computer model of how bees avoid hitting walls -- which could be a breakthrough in the development of autonomous robots.

Researchers from the Department of Computer Science built their computer model to look at how bees use vision to detect the movement of the world around them and avoid crashes.
Bees control their flight using the speed of motion -- or optic flow -- of the visual world around them, but it is not known how they do this. The only neural circuits so far found in the insect brain can tell the direction of motion, not the speed.
This study suggests how motion-direction detecting circuits could be wired together to also detect motion speed, which is crucial for controlling bees' flight.
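A helpful way to see why optic-flow speed matters is the classic 'centering' behaviour reported in honeybees: steer so that the apparent image speed is balanced between the two eyes, which keeps the bee near the middle of a corridor. The toy controller below illustrates only that heuristic; it is not the Sheffield neural-circuit model, and the corridor geometry and gain are assumptions.

```python
# Toy optic-flow centering controller: balance left/right image speed.
def centering_steer(flow_left, flow_right, gain=0.5):
    """Positive output drifts right, negative drifts left: always away
    from the side whose optic flow (i.e. the nearer wall) is faster."""
    return gain * (flow_left - flow_right)

def corridor_flow(lateral_pos, forward_speed=1.0, half_width=1.0):
    """Apparent angular speed of each wall scales as speed / distance."""
    d_left = half_width + lateral_pos     # lateral_pos > 0 means right of centre
    d_right = half_width - lateral_pos
    return forward_speed / d_left, forward_speed / d_right

# Release a virtual bee off-centre and let it re-centre itself.
pos, dt = 0.6, 0.05
for _ in range(100):
    flow_l, flow_r = corridor_flow(pos)
    pos += centering_steer(flow_l, flow_r) * dt
print(round(pos, 3))    # close to 0.0: flow is balanced at the corridor centre
```

The catch, and the point of the Sheffield model, is that only direction-selective circuits have so far been found in the insect brain, so the speed signal this kind of controller relies on has to be built out of them.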
"Honeybees are excellent navigators and explorers, using vision extensively in these tasks, despite having a brain of only one million neurons," said Dr Cope, lead researcher on the paper.
"Understanding how bees avoid walls, and what information they can use to navigate, moves us closer to the development of efficient algorithms for navigation and routing -- which would greatly enhance the performance of autonomous flying robotics," he added.
Professor James Marshall, lead investigator on the project, added: "This is the reason why bees are confused by windows -- since they are transparent they generate hardly any optic flow as bees approach them."
Dr Cope and his fellow researchers on the project -- Dr Chelsea Sabo, Dr Eleni Vasilaki, Professor Kevin Gurney and Professor James Marshall -- are now using this research to investigate how bees understand which direction they are pointing in and use this knowledge to solve tasks.
 

Five-Fingered Robot Hand Learns to Get a Grip


This 5-fingered robot hand learns to get a grip on its own


This five-fingered robot hand developed by University of Washington computer science and engineering researchers can learn how to perform dexterous manipulation -- like spinning a tube full of coffee beans -- on its own, rather than having humans program its actions.
Credit: University of Washington
Robots today can perform space missions, solve a Rubik's cube, sort hospital medication and even make pancakes. But most can't manage the simple act of grasping a pencil and spinning it around to get a solid grip.
Intricate tasks that require dexterous in-hand manipulation -- rolling, pivoting, bending, sensing friction and other things humans do effortlessly with our hands -- have proved notoriously difficult for robots.
Now, a University of Washington team of computer science and engineering researchers has built a robot hand that can not only perform dexterous manipulation but also learn from its own experience without needing humans to direct it. Their latest results are detailed in a paper to be presented May 17 at the IEEE International Conference on Robotics and Automation.
"Hand manipulation is one of the hardest problems that roboticists have to solve," said lead author Vikash Kumar, a UW doctoral student in computer science and engineering. "A lot of robots today have pretty capable arms but the hand is as simple as a suction cup or maybe a claw or a gripper."
By contrast, the UW research team spent years custom building one of the most highly capable five-fingered robot hands in the world. Then they developed an accurate simulation model that enables a computer to analyze movements in real time. In their latest demonstration, they apply the model to the hardware and real-world tasks like rotating an elongated object.
With each attempt, the robot hand gets progressively more adept at spinning the tube, thanks to machine learning algorithms that help it model both the basic physics involved and plan which actions it should take to achieve the desired result.
This autonomous learning approach developed by the UW Movement Control Laboratory contrasts with robotics demonstrations that require people to program each individual movement of the robot's hand in order to complete a single task.
"Usually people look at a motion and try to determine what exactly needs to happen --the pinky needs to move that way, so we'll put some rules in and try it and if something doesn't work, oh the middle finger moved too much and the pen tilted, so we'll try another rule," said senior author and lab director Emo Todorov, UW associate professor of computer science and engineering and of applied mathematics.
"It's almost like making an animated film -- it looks real but there was an army of animators tweaking it," Todorov said. "What we are using is a universal approach that enables the robot to learn from its own movements and requires no tweaking from us."
Building a dexterous, five-fingered robot hand poses challenges, both in design and control. The first involved building a mechanical hand with enough speed, strength, responsiveness and flexibility to mimic basic behaviors of a human hand.
The UW's dexterous robot hand -- which the team built at a cost of roughly $300,000 -- uses a Shadow Hand skeleton actuated with a custom pneumatic system and can move faster than a human hand. It is too expensive for routine commercial or industrial use, but it allows the researchers to push core technologies and test innovative control strategies.
"There are a lot of chaotic things going on and collisions happening when you touch an object with different fingers, which is difficult for control algorithms to deal with," said co-author Sergey Levine, UW assistant professor of computer science and engineering who worked on the project as a postdoctoral fellow at University of California, Berkeley. "The approach we took was quite different from a traditional controls approach."
The team first developed algorithms that allowed a computer to model highly complex five-fingered behaviors and plan movements to achieve different outcomes -- like typing on a keyboard or dropping and catching a stick -- in simulation.
Most recently, the research team has transferred the models to work on the actual five-fingered hand hardware, which never proves to be exactly the same as a simulated scenario. As the robot hand performs different tasks, the system collects data from various sensors and motion capture cameras and employs machine learning algorithms to continually refine and develop more realistic models.
"It's like sitting through a lesson, going home and doing your homework to understand things better and then coming back to school a little more intelligent the next day," said Kumar.
So far, the team has demonstrated local learning with the hardware system -- which means the hand can continue to improve at a discrete task that involves manipulating the same object in roughly the same way. Next steps include beginning to demonstrate global learning -- which means the hand could figure out how to manipulate an unfamiliar object or a new scenario it hasn't encountered before.