August 17, 2018
Adaptive Robotics at Barkley Canyon and Hydrate Ridge: Not My Normal Desk Job
Posted by larryohanlon
By Jenny Walker
This mission has taken me out of my depth, both personally and professionally. I'm a computer scientist, less than a year into my PhD at the University of Southampton. My job back on land is to develop new ways of automatically summarizing the information in images gathered on an AUV dive, and while that is still my main focus, data handling at sea has been an entirely new experience for me.
For starters, this is my first time outside of Europe, so just getting to Portland, Oregon, felt like an adventure to me. However, that was only the beginning. Back home my desk is generally more stationary. Adjusting to working at sea has been the biggest challenge for me so far – despite the wonderfully calm weather, I have found long days of coding are not possible out here. This means my work requires a bit more forethought, more automation, and a lot less tinkering than I would like to do on land.
Even beyond the personal challenges, data handling in the field is very different from my normal desk job back home. For instance, to plug the Tuna-Sand Autonomous Underwater Vehicle (AUV) directly into the ship's systems, I must first get kitted out in steel-toed boots, a hard hat, and a lifejacket, which is definitely a first for me. Once the vehicle is connected, I can access its data from any PC on board that is connected to the ship's Wi-Fi, and it's my job to run a script that copies the data across onto the main ship computer. From there I can run a script I've been developing back at Southampton for a few months, alongside my main PhD work, to correct the images for color imbalances. Once this is all complete, I let the team know and they get to work using the images to create clustered datasets and 3D mosaics of the seafloor.
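The post doesn't describe how the color-correction script works, so as an illustration only, here is a minimal sketch of one common simple approach to fixing underwater color imbalance, gray-world white balancing, which rescales each color channel so its mean matches the overall gray level. The function name and parameters are my invention, not the author's actual code:

```python
import numpy as np

def gray_world_correct(img):
    """Toy color correction using the gray-world assumption:
    scale each channel so its mean matches the global mean intensity.
    Expects an RGB image as a (H, W, 3) uint8 array."""
    img = img.astype(np.float64)
    # Per-channel means over all pixels.
    channel_means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    gray = channel_means.mean()
    # Boost dim channels (typically red underwater), attenuate strong ones.
    corrected = img * (gray / channel_means)
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Underwater images are usually blue-green heavy: simulate a weak red channel.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[..., 0] = 40   # red (attenuated by the water column)
frame[..., 1] = 120  # green
frame[..., 2] = 200  # blue
balanced = gray_world_correct(frame)
```

Real pipelines for deep-sea imagery are usually more sophisticated (accounting for depth-dependent attenuation and artificial lighting), but the basic idea of per-channel rebalancing is the same.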
While at sea, I can also get started on processing the images myself. The machine learning system I have been developing will classify every single pixel as either belonging to a living thing or not. I have my work cut out for me.
The system is doing something called "segmentation" – splitting images into areas belonging to one class or another. Ideally, by the end of my PhD I will be able to extend it to classifying different kinds of creatures into their own groups, which would let me map out the distribution and sizes of different creatures across the surveyed areas. I like to imagine a world where, as an AUV comes to the surface and we collect its data, we can automatically summarize all of the relevant information in one beautiful map, showing biodiversity and biomass distribution overlaid on stunning high-resolution 3D maps of the seafloor.
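To make "classify every single pixel" concrete: segmentation assigns a label to each pixel independently (or jointly, in more advanced models), producing a mask the same shape as the image. The actual system isn't described in the post; this toy sketch stands in a hand-set linear score over RGB values for a trained classifier, and the function name, weights, and threshold are all my own illustrative assumptions:

```python
import numpy as np

def segment_biota(img, weights=(0.2, 0.9, -0.4), bias=-60.0):
    """Toy binary segmentation: score each pixel with a linear function
    of its RGB values and threshold at zero. A real system would learn
    these weights (or use a deep network) rather than hand-set them.
    Returns a boolean mask: True = 'living thing', False = background."""
    scores = img.astype(np.float64) @ np.array(weights) + bias
    return scores > 0

# A tiny 2x2 "image": one bright-green pixel among dark bluish background.
tile = np.array([[[10, 20, 90], [15, 25, 80]],
                 [[30, 200, 40], [12, 18, 85]]], dtype=np.uint8)
mask = segment_biota(tile)  # mask[1, 0] is the only True pixel
```

From a mask like this, per-class pixel counts and region sizes can be tallied to estimate the distribution and extent of organisms across a survey, which is the kind of summary map described above.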
This post was originally published here.