Imagine for a moment that you have suction cups for your fingertips (unless you are currently on hallucinogens, in which case please don’t imagine it). Each sucker has a different size and flexibility, which makes one finger great for sticking to a flat surface like a piece of cardboard, another better suited to something round like a ball, and another better for something more irregular, like a flower pot. On its own, each digit is limited in what it can handle. But together, they can work as a team to manipulate a range of objects.
This is the idea behind Ambi Robotics, a startup spun out of a university laboratory that is now coming out of stealth mode with sorting robots and an operating system to run them. The company’s founders want to put robots to work on a task any rational machine should be terrified of: picking up objects in warehouses. What comes so easily to people – grabbing an object that isn’t too heavy – is actually a nightmare for robots. After decades of research in robotics laboratories around the world, machines still fall far short of our dexterity. But what they might need are suction cups for their fingertips.
Ambi Robotics grew out of a University of California, Berkeley research project called Dex-Net, which models how robots can grasp everyday objects. Think of it as the robotic version of how computer scientists build image-recognition AI. To teach machines to recognize, say, a cat, researchers first assemble a database of lots of images containing felines. In each of them, they draw a box around the cat to teach the neural network: look, here’s a cat. Once the network has analyzed a large number of examples, it can “generalize,” automatically recognizing a cat in a new image it has never seen before.
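The label-then-generalize loop described above can be sketched in a few lines. This is a toy illustration, not how production image recognizers work: each “image” is reduced to a made-up two-number feature vector, and a simple nearest-centroid rule stands in for the neural network.

```python
import numpy as np

# Tiny labeled dataset: each "image" boiled down to two hypothetical
# features (say, ear-pointiness and whisker-density), labeled by hand.
cats     = np.array([[0.90, 0.80], [0.80, 0.90], [0.95, 0.85]])
not_cats = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.25]])

def classify(features):
    """Nearest-centroid stand-in for a trained network: compare a new
    example to the average of each labeled class and pick the closer one."""
    d_cat = np.linalg.norm(features - cats.mean(axis=0))
    d_not = np.linalg.norm(features - not_cats.mean(axis=0))
    return "cat" if d_cat < d_not else "not cat"

# "Generalizing": a new example never seen during labeling.
print(classify(np.array([0.85, 0.75])))  # → cat
```

The point of the sketch is the same as in the article: the classifier never memorizes individual examples, it abstracts a pattern from them and applies it to inputs it has never seen.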
Dex-Net works the same way, but for robotic grippers. Working in simulation, scientists create 3D models of all kinds of objects, then calculate where a robot must touch each one to get a “robust” grip. On a ball, for example, you would want the robot to grip around the equator, not pinch at the poles. It seems obvious, but robots have to learn these things from scratch. “In our case, the examples are not images, but actually 3D objects with robust grasp points,” says Ken Goldberg, a roboticist at UC Berkeley who developed Dex-Net and co-founded Ambi Robotics. “Then when we brought that into the network, it had a similar effect – that is, it started to generalize to new objects.” Even if the robot has never seen a particular object before, it can draw on its training with a galaxy of other objects to calculate the best way to grasp it.
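The equator-versus-poles intuition can be captured in a toy scoring function. This is a hypothetical heuristic for illustration, not Dex-Net’s actual robustness metric (which analyzes contact forces and torques on full 3D models): here a candidate two-fingered grasp axis on a ball is simply scored by how perpendicular it is to gravity, so equator grasps beat pole pinches.

```python
import numpy as np

def grasp_score(axis, gravity=np.array([0.0, 0.0, -1.0])):
    """Toy robustness score for a two-fingered grasp on a ball.

    A grasp axis perpendicular to gravity (gripping around the equator)
    scores near 1; an axis parallel to gravity (pinching the poles)
    scores near 0. Illustrative only, not Dex-Net's metric.
    """
    axis = axis / np.linalg.norm(axis)
    gravity = gravity / np.linalg.norm(gravity)
    return 1.0 - abs(float(np.dot(axis, gravity)))

def best_grasp(candidates):
    """Pick the candidate grasp axis with the highest score."""
    return max(candidates, key=grasp_score)

# Candidate grasp axes on the ball:
candidates = [
    np.array([0.0, 0.0, 1.0]),  # pinch the poles (score 0.0)
    np.array([0.0, 1.0, 1.0]),  # diagonal grasp
    np.array([1.0, 0.0, 0.0]),  # grip around the equator (score 1.0)
]
best = best_grasp(candidates)  # the equator axis wins
```

In the real system the scoring happens in simulation over millions of grasp candidates on thousands of 3D models, and a neural network is then trained on those (grasp, score) pairs so it can estimate scores for objects it has never seen.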
Consider the grotesque ceramic coffee mug you made in art class in elementary school. You may have shaped it absurdly, but you probably remembered to give it a handle. When you handed it to your parents and they pretended to like it, they grabbed it by the handle – they had seen their fair share of professionally made coffee mugs, so they already knew how to grasp it. Ambi Robotics’ robot operating system, AmbiOS, is the equivalent of that prior experience, only for robots.
“As humans, we’re able to deduce how to handle this object, even though it doesn’t look like any mug that’s ever been made before,” says Stephen McKinley, co-founder of Ambi Robotics. “The system can reason about what the rest of this object looks like, so that if you’ve detected that part, you can reasonably assume it’s a decent grip.”