RoboLobsters to the rescue!
A few weeks ago I was invited to participate in a panel discussing social media for scientists at the University of Rhode Island. It was a fab day, led by the always-engaging Bora Zivkovic, and while the discussion was lively and interesting, my real jaw-hitting-the-floor moment came when my fellow panelist Dan Blustein introduced himself. I shall paraphrase:
“Hi I’m Dan Blustein, a grad student at Northeastern University, and I make robot lobsters.”
And it’s not just for fun. The robotic lobsters and lampreys that Dan and his colleagues work on are an incredible feat of bioengineering. Since I didn’t have much chance to talk to Dan after the panel, I got in touch with him after the fact and asked him a few questions about his work.
KP: The lobster and lamprey robots you’re making rely on biomimetic control. As I understand it, these systems rely on analogue rather than digital signals to transmit information much as an animal neuron would. Is that correct?
DB: The neurons that make up the electronic nervous systems controlling our robots are not real neurons; they're simulated. We've built nervous systems with two different types of neurons: one analogue, the other digital. The analogue neurons are made up of a series of circuits that calculate equations (we use the Hindmarsh-Rose model). The equations basically describe the dynamics of the ions that flow in and out of a neuron to produce action potentials and other neural signals. These circuits are quite faithful to the biological neuron and operate in real time, but for large networks of neurons they take up a lot of space and can produce a lot of heat. We're working on a VLSI (very-large-scale integration) implementation to shrink these circuits down so that large networks fit in small robot hulls.

We also use another neuron simulation called the discrete-time map-based neuron model. This simulation doesn't mimic everything that happens inside a neuron, but it does mimic the types of action potential outputs that biological neurons produce. It is coded digitally and allows us to run large networks that we can quickly modify. We could run the first type of neuron simulation in code, but computing the equations is a fairly intensive process, so we run into delays on the robots.
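(An aside for the code-inclined: here is a minimal sketch of what integrating the Hindmarsh-Rose equations looks like, using simple Euler steps. The parameter values below are the commonly published textbook set, not the values used in the lab's circuits, and the spike-counting threshold is an arbitrary choice for illustration.)

```python
import numpy as np

def hindmarsh_rose(steps=20000, dt=0.01, I_ext=3.0):
    """Euler integration of the Hindmarsh-Rose neuron model.

    x: membrane potential, y: fast recovery, z: slow adaptation.
    Parameters are the standard textbook set, not the lab's circuit values.
    """
    a, b, c, d = 1.0, 3.0, 1.0, 5.0
    r, s, x_rest = 0.006, 4.0, -1.6
    x, y, z = -1.6, -10.0, 2.0
    trace = np.empty(steps)
    for t in range(steps):
        dx = y - a * x**3 + b * x**2 - z + I_ext
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_rest) - z)
        x += dt * dx
        y += dt * dy
        z += dt * dz
        trace[t] = x
    return trace

v = hindmarsh_rose()
# count upward crossings of an arbitrary spike threshold
spikes = int(np.sum((v[1:] >= 1.0) & (v[:-1] < 1.0)))
```

With this parameter set the model produces the bursting spike trains the interview describes, which is exactly why computing it fast enough for a robot is the hard part.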
One thing that is unique about our robots is how they move. Rather than motors or pneumatics, we use a muscle analog called nitinol, which comes in wire form. This material is a shape memory alloy: when you heat it, it contracts much as muscle does. We use this contraction to move joints in our robots. To heat it, we drive pulses of current through the wire, triggered by the neurons in our electronic nervous systems. The wire's resistance causes it to heat up, which makes it contract. When the pulses stop, the wire relaxes and stops moving the joint. This is how we biomimetically move our robots!
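(To caricature the pulse-heating idea in code: a first-order thermal model with Joule heating while a pulse is on, Newtonian cooling toward ambient, and a contraction that ramps once the wire passes its transition temperature. Every parameter here, including current, resistance, thermal constants, and transition temperature, is invented for the example, not measured from the lab's wires.)

```python
def nitinol_contraction(pulses, dt=0.01, R=10.0, I=1.0,
                        C=0.2, k=0.5, T_amb=20.0, T_trans=70.0):
    """Toy model of a pulse-driven nitinol wire.

    pulses: sequence of 0/1 flags, one per time step (1 = current on).
    Returns the fractional contraction (0..1) at each step.
    All parameter values are illustrative, not real nitinol data.
    """
    T = T_amb
    out = []
    for p in pulses:
        power = (I ** 2) * R if p else 0.0       # Joule heating, P = I^2 * R
        T += dt * (power / C - k * (T - T_amb))  # heat in, Newtonian cooling out
        # contraction ramps linearly over ~20 degrees above the transition
        out.append(max(0.0, min(1.0, (T - T_trans) / 20.0)))
    return out

driven = nitinol_contraction([1] * 500)  # pulses on: wire heats and contracts
idle = nitinol_contraction([0] * 500)    # no pulses: wire stays relaxed
```

In the robots, the 0/1 pulse train would come from the spiking output of the electronic nervous system rather than a hand-written list.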
KP: The technology allows the robot to be autonomous. What has been the biggest challenge in the lab in terms of coordinating environmental sensing and behavioral output?
DB: The robot autonomy we have developed is based on neural networks that describe how an animal reacts to known sensory information in the environment. We run into challenges when the animal/robot is faced with novel environmental conditions. But we try to use this challenge to our advantage as we develop the electronic nervous systems. Let me try to explain. You can get a lobster to walk forward by moving its visual world from front to back across its eye (think lobster on a treadmill with moving walls). The bending of a lobster’s antennae will also stimulate forward walking, and the lobster will walk upstream into a water current. Normally, if the lobster’s antennae bend one way, the visual world will move in a specific way; these stimuli are paired under normal conditions. However, in the lab we can subject lobsters and lobster robots to confusing sensory information so that these two sensory cues are mismatched. For example, we can move the visual world as if the lobster were walking backwards but keep the water flow coming head-on at the lobster as if it were walking forward. We can look at how the lobster reacts to get a sense of how these two sensory systems interact. By comparing that response to our robot’s, we can see if our electronic nervous system is on track or if it needs to be adjusted.
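(The cue-pairing logic can be caricatured as a weighted sum: two sensory signals, each scaled from -1 for "walk backward" to +1 for "walk forward", feed a single walking-drive value. The weights are invented purely for illustration; the actual controllers are spiking neural networks, not a weighted sum.)

```python
def walking_drive(visual_flow, antenna_bend, w_visual=0.6, w_antenna=0.4):
    """Toy cue combination: +1 means a cue says 'walk forward', -1 'backward'.

    Weights are arbitrary illustration values, not fitted to lobster data.
    """
    drive = w_visual * visual_flow + w_antenna * antenna_bend
    return max(-1.0, min(1.0, drive))

matched = walking_drive(1.0, 1.0)      # cues agree: strong forward drive
mismatched = walking_drive(-1.0, 1.0)  # cues conflict: weak, ambiguous drive
```

The mismatch experiments described above probe exactly this: when the cues disagree, the animal's response reveals how the real nervous system weighs them.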
KP: The major application of these robots appears to be underwater exploration, both close to shore (RoboLobster) and in the open ocean (RoboLamprey). Are there other applications that you see in their future?
DB: The RoboLobster and RoboLamprey were originally funded for underwater mine detection and were designed to operate in tandem looking for mines on the ocean floor and floating in the water column. We could also see these robots being used for a range of other underwater tasks, including underwater search and surveys, environmental tracking, and the inspection of bridge pylons and dams.
KP: The circuits that you use to build biomimetic robots are modular in nature. Does this mean you can tailor the robots to fulfill specific missions or objectives?
DB: The idea is to build cheap, expendable robots that are easily customizable for a range of missions. If you need an infrared camera for checking leaks in a pipe, we can attach one. If you want your underwater robot to throw an ocean dance party, we’ll attach some flashing lights and a disco ball.
KP: The primary focuses at the Marine Science Center are naturally underwater exploits, but are there terrestrial (or even extraterrestrial) applications for biomimetic robots?
DB: Technically speaking one can make biomimetic robots for any type of environment in which life is found. Although we don’t have any extraterrestrials yet to mimic for outer space environments, that could change someday. We’re part of a team working on a project to build a robotic bee, a task that presents a range of challenges we don’t deal with underwater.
KP: On a more personal note, what would you like to see these robots accomplish in the near future?
DB: Scientifically I’d like to get to the point where the behavior of our robots is indistinguishable from their animal counterparts. That would mean we’re really getting at how nervous systems work. But really, I’d just like to see a robot animal zoo. It would be a great educational tool and besides, who hasn’t wanted to ride a robot camel at some point?
KP: Riding a RoboCamel would indeed be a dream come true…
If you would like to learn more about the RoboLobster and RoboLamprey project, head over to the lab website, and if you’d like to read more about what Dan does on a day-to-day basis check out his blog and follow him on Twitter @bloostein.