My research goals

I wanted to clarify, to myself and others, what some of my research goals are and why I’m working on certain problems. The hope is that putting this online for the world to see will challenge me to stay focused and keep working towards those goals—sort of like telling your friends that you’re going to quit smoking.

My broad long-term goal is to build reliable, competent domestic robots: in other words, robots that can help you around the house with tidying up, folding laundry, and the kind of things you might now ask Alexa/Siri/Google Home for, like playing music and setting timers.

In addition to being an interesting technical challenge, domestic robots are just something I personally would love to have. They could also make life a lot better for elderly people and people who need long-term care—if you can’t pay for a caregiver, and if your loved ones aren’t able to take on the role of caregiver, a robot might be an affordable alternative.

An important aspect of my goal is that I don’t want these robots to rely on the Internet—I want their AI to live on the robot, offline. There are a few reasons for this.

1) Privacy. Yes, yes, I know there are a lot of people working on things like computation on encrypted data (“some of my best friends work on data privacy!”), but barring some big breakthroughs, I’d prefer to just cut the Gordian knot and avoid sending my data to the cloud altogether.

2) Latency/reliability. Suppose a robot lives with an elderly person, and the person is about to slip and fall. The robot needs to detect this and quickly move to keep the person from getting hurt. We might use a neural network to map the robot’s camera feed to a sequence of physical actions to take. If we store that neural network in the cloud, and the Internet cuts out, the robot won’t be able to act.

3) The Internet shouldn’t even be necessary. If a task doesn’t inherently require the Internet, the way checking the weather forecast or buying groceries does, I shouldn’t have to use the Internet to do it. My brain takes up about as much space as a hard drive and runs on 20 watts—I don’t have to store it in the cloud and run it on a supercomputer. It’s annoying that I need the Internet just to tell my phone to set a reminder.

And this last one admittedly isn’t a very good reason, but I want to be honest with myself:

4) I want to own all my stuff. I’m annoyed that less and less of my stuff consists of physical things I definitely own, and more and more of it is virtual things in the cloud that I maybe kind of own, or maybe am just renting. I want to own my data, for instance; I don’t want to have to worry about where it lives and who can see it. (But maybe this is just a bit of nostalgia, or a bit of the tin-foil-hat-wearing libertarian’s instinct to convert his digital money into a pile of gold and hide it under the bed.)

I suspect that making really good, reliable domestic robots will require solving the AI problems—true spoken language understanding, fine-grained motor control, long-term planning—with extremely large neural networks. I’m talking on the order of 100 trillion parameters. For reference, the biggest neural net currently reported in the literature has only 600 billion.

Why do I say that? There’s lots of evidence. GPT-3 has 175 billion parameters (and there are theoretical and empirical reasons to believe that models really do need to be that big), and that’s just for language understanding/generation: our robot will also need motor control, audio processing, vision, and haptics. The way we do AI now, each of these modalities is handled separately; I think we will ultimately need the modalities to be handled together, so that your robot doesn’t think a unicorn might have 4 horns—and so our models will need even more capacity, to understand how the different modalities fit together. And if you like taking inspiration from human brains: ours have about 100 trillion synapses. Assuming that 1 synapse = 1 weight, and that Nature is efficient, we’ll need about 100 trillion parameters to make AI that can do the range of things human brains can do.
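Just to make those orders of magnitude concrete, here’s the back-of-the-envelope arithmetic. The figures are the rough ones quoted above, and the one-synapse-per-weight assumption is exactly that, an assumption:

```python
# Rough orders of magnitude for the scaling argument above.
# All figures are the approximate ones quoted in the text, not measurements.

gpt3_params          = 175e9    # GPT-3: language only
largest_reported     = 600e9    # biggest net reported in the literature so far
human_brain_synapses = 100e12   # ~100 trillion synapses

# Assumption: 1 synapse ~ 1 weight, and Nature isn't wildly inefficient.
target_params = human_brain_synapses

print(f"target / GPT-3:            ~{target_params / gpt3_params:.0f}x")        # ~571x
print(f"target / largest reported: ~{target_params / largest_reported:.0f}x")   # ~167x
```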

Running 100-trillion-parameter networks on a domestic robot is going to be hard. Just running a 600-billion-parameter network on a supercomputer was a significant engineering challenge for Google. Will Moore’s Law alone get us there? Could we just wait until 2040, when a Raspberry Pi has 1 PetaFLOPS of processing and 100 TB of storage? Or do we need to consider fundamentally different hardware designs?
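As a very rough sanity check on that question, here’s how many doublings would separate a present-day single-board computer from that hypothetical 2040 Raspberry Pi. The “today” numbers (roughly 10 GFLOPS of compute and 0.1 TB of storage) are illustrative guesses, not the specs of any particular board:

```python
import math

# How many Moore's-Law doublings separate today's hardware from that
# hypothetical 2040 Raspberry Pi? The "today" numbers are ballpark
# assumptions for a single-board computer, not measured specs.

today_flops, today_storage_tb   = 10e9, 0.1     # ~10 GFLOPS, ~100 GB
target_flops, target_storage_tb = 1e15, 100.0   # 1 PetaFLOPS, 100 TB

print(f"compute: ~{math.log2(target_flops / today_flops):.1f} doublings")            # ~16.6
print(f"storage: ~{math.log2(target_storage_tb / today_storage_tb):.1f} doublings")  # ~10.0
```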

No, I’m not talking about matrix multiplication accelerators, which have been optimized to death already. I’m talking about data storage—way less sexy, but ripe for neural-net-specific optimizations. Right now, a 100 TB drive costs $40,000. The alternative, connecting up dozens of cheaper, smaller drives, might not be energy-efficient or space-efficient enough for a robot. Could we make denser, cheaper storage by taking advantage of the fact that—unlike your OS and bank account information—neural net weights can withstand a little noise?
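To make the “a little noise is fine” idea concrete, here’s a toy sketch: perturb a network’s weights as if they had been read back from an imperfect but denser medium, and measure how much the outputs move. The two-layer random MLP and the noise levels are stand-ins I made up for illustration, not a model of any real storage device:

```python
import numpy as np

# Toy illustration of the "noisy storage" idea: perturb a network's weights
# as if they'd been read back from an imperfect (but denser/cheaper) medium,
# and see how much the outputs move. A random 2-layer MLP stands in for a
# real trained model; the noise levels are arbitrary assumptions.

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    return np.maximum(x @ w1, 0.0) @ w2          # ReLU MLP forward pass

d_in, d_hidden, d_out = 64, 256, 10
w1 = rng.normal(0, 1 / np.sqrt(d_in), (d_in, d_hidden))
w2 = rng.normal(0, 1 / np.sqrt(d_hidden), (d_hidden, d_out))
x  = rng.normal(size=(32, d_in))                 # a batch of dummy inputs

clean = mlp(x, w1, w2)

for noise_std in [1e-4, 1e-3, 1e-2]:             # "read noise" as a fraction of the mean weight magnitude
    w1_noisy = w1 + rng.normal(0, noise_std * np.abs(w1).mean(), w1.shape)
    w2_noisy = w2 + rng.normal(0, noise_std * np.abs(w2).mean(), w2.shape)
    noisy = mlp(x, w1_noisy, w2_noisy)
    rel_err = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
    print(f"read noise {noise_std:.0e} -> relative output change {rel_err:.2e}")
```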

Whatever the hardware for domestic robot AI looks like, conditional computation—that is, not using all 100 trillion parameters of the neural network every time the clock ticks—is certainly going to be a part of the solution. That’s the focus of my PhD right now. My current hunch is that the hard mixture-of-experts is a good starting point, but not the final word in conditional computation. Just as inductive biases like translational invariance can make computer vision easier to learn, there are simple inductive biases from computer architecture and psychology which I think can make the hard mixture-of-experts easier to train. The fun part of AI is figuring out how little inductive bias we can get away with—and pilfering that bias from human brains and other sources of inspiration when we do need it.
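For concreteness, here’s a minimal sketch of the kind of hard (top-1) mixture-of-experts routing I mean: a gating network picks exactly one expert per input, so only that expert’s parameters are touched. The shapes, expert count, and random weights are placeholders; a real system would also need load balancing and a way to train through the discrete routing decision:

```python
import numpy as np

# Minimal sketch of hard (top-1) mixture-of-experts routing: each input is
# processed by exactly one expert, which is the point of conditional
# computation. Shapes and expert count are arbitrary placeholders.

rng = np.random.default_rng(0)

n_experts, d_model = 8, 64
gate_w  = rng.normal(0, 0.02, (d_model, n_experts))           # gating network
experts = rng.normal(0, 0.02, (n_experts, d_model, d_model))  # one weight matrix per expert

def hard_moe(x):
    """x: (batch, d_model). Each row is routed to exactly one expert."""
    scores = x @ gate_w                       # (batch, n_experts) gating scores
    chosen = scores.argmax(axis=1)            # hard top-1 routing decision
    out = np.empty_like(x)
    for e in range(n_experts):                # dispatch each row to its chosen expert
        idx = np.where(chosen == e)[0]
        if idx.size:
            out[idx] = x[idx] @ experts[e]
    return out, chosen

x = rng.normal(size=(16, d_model))
y, chosen = hard_moe(x)
print("expert load:", np.bincount(chosen, minlength=n_experts))
```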