Robotic hand uses “skin” to hold chopsticks, meatballs, and bubble wrap just right | Technology News

From holding a ball to deftly wielding chopsticks, the new robotic hand developed by British scientists can grasp a variety of objects using only wrist movement and the sense of touch from its "skin".

The 3D-printed hand is designed to be low-cost and energy-efficient, and is capable of complex movements even though it cannot move each finger independently.

Professor Fumiya Iida of the University of Cambridge's Bio-Inspired Robotics Laboratory says the goal is to "simplify the hand as much as possible."

Most advanced robots capable of performing tasks like a human hand have fully motorized fingers, making them more difficult and expensive to produce.

But this cheaper alternative proved effective in more than 1,200 tests, which included measuring how much pressure the hand applied to a given object.


A robotic hand gracefully wields chopsticks. Photo: University of Cambridge

‘Robot skin’ helps judge how much force is needed

While you instinctively know how to hold an egg gently without breaking it and ruining your breakfast, a robot needs training to work out the right amount of force.

In this case, the researchers embedded sensors in the hand so it can sense what it is touching.

It uses trial and error to learn which grips work, starting with a ball and moving on to everything from peaches and bubble wrap to computer mice.

Study co-author Dr Thomas George-Thuruthel, now at University College London, said the sensor was “a bit like the skin of a robot”.

“We can’t say exactly what information the robot has,” he added, “but it could theoretically estimate where the object was grasped and with what force.”

The robot can also predict whether it will drop an object and adjust accordingly.
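The control idea described above (sense the object through the "skin", detect when a grip is about to fail, and tighten until it holds) can be illustrated with a minimal feedback loop. This is a hypothetical sketch, not the researchers' implementation; the sensor reading, threshold, and force step are all invented for illustration:

```python
def adjust_grip(read_shear, set_force, force=1.0,
                slip_threshold=0.2, step=0.5, max_force=10.0):
    """Illustrative grip-force loop: increase force until the tactile
    shear signal (a cue that the object is slipping) falls below a
    threshold, or give up at the actuator's force limit.
    All parameter values here are assumptions, not measured values."""
    while force < max_force:
        set_force(force)                  # command the current grip force
        if read_shear() < slip_threshold:
            return force                  # grip is stable: stop squeezing
        force += step                     # object is slipping: squeeze harder
    return None                           # object cannot be held safely
```

In a real system, `read_shear` would come from the tactile "skin" and `set_force` from the hand's actuators; here they are stand-ins that let the loop be tried with a simulated sensor.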

The researchers hope to further improve the robotic hand, such as adding computer vision capabilities and teaching it to use its surroundings to grasp a wider range of objects.

The results were published in the journal Advanced Intelligent Systems.
