How roboticists are thinking about generative AI


[A version of this piece first appeared in TechCrunch’s robotics newsletter, Actuator. Subscribe here.]

The topic of generative AI comes up frequently in my newsletter, Actuator. I admit that I was a bit hesitant to spend more time on the subject a few months back. Anyone who has been reporting on technology for as long as I have has lived through countless hype cycles and been burned before. Reporting on tech requires a healthy dose of skepticism, hopefully tempered by some excitement about what can be done.

This time around, generative AI seemed to be waiting in the wings, biding its time until the inevitable cratering of crypto. As the blood drained out of that category, projects like ChatGPT and DALL-E were standing by, ready to become the focus of breathless reporting, hopefulness, criticism, doomerism and all the other Kübler-Rossian stages of the tech hype bubble.

Those who follow my stuff know that I was never especially bullish on crypto. Things are, however, different with generative AI. For starters, there’s near-universal agreement that artificial intelligence/machine learning broadly will play a more central role in our lives going forward.

Smartphones offer great insight here. Computational photography is something I write about somewhat regularly. There have been great advances on that front in recent years, and I think many manufacturers have finally struck a good balance between hardware and software when it comes to both improving the end product and lowering the barrier to entry. Google, for instance, pulls off some truly impressive tricks with editing features like Best Take and Magic Eraser.

Sure, they’re neat tricks, but they’re also useful, rather than being features for features’ sake. Moving forward, however, the real trick will be seamlessly integrating them into the experience. With ideal future workflows, most users will have little to no notion of what’s happening behind the scenes. They’ll just be happy that it works. It’s the classic Apple playbook.

Generative AI offers a similar “wow” effect right out of the gate, which is another way it differs from its hype cycle predecessor. When your least tech-savvy relative can sit at a computer, type a few words into a dialogue field and then watch as the black box spits out paintings and short stories, there isn’t much conceptualizing required. That’s a big part of the reason all of this caught on as quickly as it did — most times when everyday people get pitched cutting-edge technologies, it requires them to visualize how it might look five or 10 years down the road.

With ChatGPT, DALL-E, etc., you can experience it firsthand right now. Of course, the flip side of this is how difficult it becomes to temper expectations. Much as people are inclined to imbue robots with human or animal intelligence, without a fundamental understanding of AI, it’s easy to project intentionality here. But that’s just how things go now. We lead with the attention-grabbing headline and hope people stick around long enough to read about the machinations behind it.

Spoiler alert: Nine times out of 10 they won’t, and suddenly we’re spending months and years attempting to walk things back to reality.

One of the nice perks of my job is the ability to break these things down with people much smarter than me. They take the time to explain things and hopefully I do a good job translating that for readers (some attempts are more successful than others).

Once it became clear that generative AI has an important role to play in the future of robotics, I started finding ways to shoehorn questions into conversations. I find that most people in the field agree with that statement, and it’s fascinating to see the breadth of impact they believe it will have.

For example, in my recent conversation with Marc Raibert and Gill Pratt, the latter explained the role generative AI is playing in his organization’s approach to robot learning:

We have figured out how to do something, which is use modern generative AI techniques that enable human demonstration of both position and force to essentially teach a robot from just a handful of examples. The code is not changed at all. What this is based on is something called diffusion policy. It’s work that we did in collaboration with Columbia and MIT. We’ve taught 60 different skills so far.
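For the curious, the core idea behind a diffusion policy is the same denoising trick that powers image generators, applied to robot actions: a network learns to strip noise from demonstrated action sequences, conditioned on what the robot observes. The toy sketch below is purely illustrative (the network, dimensions and noise schedule are placeholder assumptions, not the team’s actual code), but it shows the rough shape of the training loop.

```python
# Illustrative sketch of the diffusion-policy idea (placeholder code, not the
# team's implementation). A small network learns to predict the noise added to
# an action sequence, conditioned on an observation: the same denoising trick
# used by image generators, repurposed for robot control.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HORIZON, T_STEPS = 10, 7, 16, 100  # placeholder sizes

class NoisePredictor(nn.Module):
    """Predicts the noise that was added to a (flattened) action sequence."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM * HORIZON + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM * HORIZON),
        )

    def forward(self, obs, noisy_actions, t):
        # t is the diffusion timestep, scaled to [0, 1] and fed as an extra input
        x = torch.cat([obs, noisy_actions, t.float().unsqueeze(-1) / T_STEPS], dim=-1)
        return self.net(x)

def training_step(model, optimizer, obs, actions):
    """One denoising step on a batch of demonstrated (observation, action-sequence) pairs."""
    batch = obs.shape[0]
    flat_actions = actions.reshape(batch, -1)
    t = torch.randint(0, T_STEPS, (batch,))           # random diffusion timestep
    alpha = 1.0 - t.float().unsqueeze(-1) / T_STEPS   # toy linear noise schedule
    noise = torch.randn_like(flat_actions)
    noisy = alpha.sqrt() * flat_actions + (1 - alpha).sqrt() * noise
    pred = model(obs, noisy, t)
    loss = nn.functional.mse_loss(pred, noise)        # learn to predict the injected noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = NoisePredictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # A "handful of examples": random tensors standing in for human demonstrations.
    demo_obs = torch.randn(8, OBS_DIM)
    demo_actions = torch.randn(8, HORIZON, ACT_DIM)
    for step in range(5):
        print(training_step(model, opt, demo_obs, demo_actions))
```

At run time, the trained network is used in reverse: start from random noise and denoise it, step by step, into an action sequence the robot can execute.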

Last week, when I asked Nvidia’s VP and GM of Embedded and Edge Computing, Deepu Talla, why the company believes generative AI is more than a fad, he told me:

I think it speaks in the results. You can already see the productivity improvement. It can compose an email for me. It’s not exactly right, but I don’t have to start from zero. It’s giving me 70%. There are obvious things you can already see that are definitely a step function better than how things were before. Summarizing something’s not perfect. I’m not going to let it read and summarize for me. So, you can already see some signs of productivity improvements.

Meanwhile, during my last conversation with Daniela Rus, the MIT CSAIL head explained how researchers are using generative AI to actually design the robots:

It turns out that generative AI can be quite powerful for solving even motion planning problems. You can get much faster solutions and much more fluid and human-like solutions for control than with model predictive solutions. I think that’s very powerful, because the robots of the future will be much less roboticized. They will be much more fluid and human-like in their motions.

We’ve also used generative AI for design. This is very powerful. It’s also very interesting, because it’s not just pattern generation for robots. You have to do something else. It can’t just be generating a pattern based on data. The machines have to make sense in the context of physics and the physical world. For that reason, we connect them to a physics-based simulation engine to make sure the designs meet their required constraints.
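Rus’s point about tying generation to physics is worth dwelling on, because the loop is conceptually simple: a generative model proposes candidate designs, and a physics-based simulator throws out anything that breaks the constraints. Below is a deliberately simplified sketch of that generate-then-verify pattern; the design parameters and checks are placeholder assumptions, and a real system would run a full dynamics simulation rather than these toy rules.

```python
# Rough sketch of a generate-then-verify loop (all names and checks are
# placeholders): a generative model proposes candidate robot designs, and a
# physics-based check rejects any that don't make sense in the physical world.
import random
from dataclasses import dataclass

@dataclass
class Design:
    limb_count: int
    limb_length_m: float
    body_mass_kg: float

def propose_design() -> Design:
    """Stand-in for a generative model sampling a candidate design."""
    return Design(
        limb_count=random.randint(1, 8),
        limb_length_m=random.uniform(0.05, 1.0),
        body_mass_kg=random.uniform(0.5, 20.0),
    )

def passes_physics_check(d: Design) -> bool:
    """Stand-in for a physics-based simulation engine enforcing constraints,
    e.g. the actuators must be able to carry the body and the proportions
    must be buildable. A real system would simulate the full dynamics here."""
    supports_weight = d.limb_count * 5.0 >= d.body_mass_kg          # toy actuator limit
    buildable = d.limb_length_m <= 0.1 * d.limb_count + 0.3         # toy proportion rule
    return supports_weight and buildable

def generate_valid_design(max_tries: int = 1000) -> Design:
    """Keep sampling until the simulator says the design meets its constraints."""
    for _ in range(max_tries):
        candidate = propose_design()
        if passes_physics_check(candidate):
            return candidate
    raise RuntimeError("no physically valid design found")

if __name__ == "__main__":
    print(generate_valid_design())
```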

This week, a team at Northwestern University unveiled its own research into AI-generated robot design. The researchers showcased how they designed a “successfully walking robot in mere seconds.” It’s not much to look at, as these things go, but it’s easy enough to see how, with additional research, the approach could be used to create more complex systems.

“We discovered a very fast AI-driven design algorithm that bypasses the traffic jams of evolution, without falling back on the bias of human designers,” said research lead Sam Kriegman. “We told the AI that we wanted a robot that could walk across land. Then we simply pressed a button and presto! It generated a blueprint for a robot in the blink of an eye that looks nothing like any animal that has ever walked the earth. I call this process ‘instant evolution.’”

It was the AI program’s choice to put legs on the small, squishy robot. “It’s interesting because we didn’t tell the AI that a robot should have legs,” Kriegman added. “It rediscovered that legs are a good way to move around on land. Legged locomotion is, in fact, the most efficient form of terrestrial movement.”

“From my perspective, generative AI and physical automation/robotics are what’s going to change everything we know about life on Earth,” Formant founder and CEO Jeff Linnell told me this week. “I think we’re all hip to the fact that AI is a thing and are expecting every one of our jobs, every company and student will be impacted. I think it’s symbiotic with robotics. You’re not going to have to program a robot. You’re going to speak to the robot in English, request an action and then it will be figured out. It’s going to be a minute for that.”

Prior to Formant, Linnell founded and served as CEO of Bot & Dolly. The San Francisco–based firm, best known for its work on the film Gravity, was hoovered up by Google in 2013 as the software giant set its sights on accelerating the industry (the best-laid plans, etc.). The executive tells me that his key takeaway from that experience is that it’s all about the software (given the arrival of Intrinsic and Everyday Robots’ absorption into DeepMind, I’m inclined to say Google agrees).


