Robot ethics and sustainability tech discussed at Sensors Converge

Questions involving technology ethics around robots and global sustainability surfaced at the opening of Sensors Converge 2022 in San Jose.

Keynote speakers challenged developers with multiple tech-related dilemmas. One addressed the thorny problems posed by robots and other artificial intelligence as they advance, potentially threatening jobs, or worse. Questions about sustainability were aired as well, including how sensors and other tech can help farmers grow better, safer crops.

Actor and environmentalist Adrian Grenier, now a Texas farmer, described his work growing trees for timber amid heat and water-supply pressures while fending off deer and gophers, even as he promotes ways to keep plastics from entering the world's oceans. Bitcoin can also help secure supply chains and cut down on waste, he added.

Adrian Grenier (Photo: Matt Hamblen)

Grenier is the creator of Earth Speed, a docu-series on environmental innovators, and co-founder of SHFT.COM, an online community for people to interact in a more sustainable way.

Farming has deepened Grenier's appreciation for the seemingly countless environmental considerations ordinary farmers weigh when trying to raise a successful crop, he said. Working his farm has also given him time to reflect on some of the big philosophical and political concerns that tech companies and developers face, even if their average days are consumed with MEMS, SoCs and AI software.

When asked by Questex CEO Paul Miller to look into a crystal ball for the future of farming and technology more generally, Grenier described himself as an optimist. “The spirit of a tool is determined by the user,” he said, paraphrasing an older saying that calls on scientists and engineers to weigh not only whether a device can be made, but whether it should be made for the general good.

Kate Darling, a robot ethicist and research specialist at the MIT Media Lab, took some mild stabs at the ways tech companies are developing AI, working largely without the benefit of a broader political discourse over how robots should behave and, more importantly, how humans should regard robots in their midst.

“It’s an interesting time to be in this field when companies are starting to deploy and realize they’ve made a mistake,” she said. In one example she offered, a company might inadvertently re-create bias by building robotic assistants for offices that have female voices while a supervisor robot has a male voice. “A lot of companies are not even thinking about this…. Robots may tell other people your secrets, and I’m not sure people are aware of this.”

In urban areas, delivery robots use sidewalks and can get in the way of wheelchairs and people pushing strollers, raising the question of whether public space should be regulated for robots. “All of it is happening at the local level and it’s a huge question,” she said. “The infrastructure of entire cities changed with cars or even before with horses. A lot of [technology] changes could set things in motion for a long time.”

Given the complexities of how robot behaviors and roles are developing, she called for “more political effort,” asking, “why not rethink this… The bad news is it will take more of a large-scale effort. These are political decisions and we should get involved.” Ultimately, she said, it might require more regulation of the corporate profit motives behind questionable technology decisions.

Darling said the trend in some warehouse operations is to have human workers perform repetitive motions, almost as if to make the humans more like robots.

The direction for employers to consider might be “not to automate away these pesky workers, but how to use robots to do a better job… The future is not full automation; it’s one of human-robot interaction.”

When one audience member asked about the future of automated weapon systems used by militaries around the world, she noted that the U.S. military requires a human to be involved in making kill decisions, even though technology could be performing that role autonomously. “Militaries are worried about this all over the world,” she said.

While the United Nations is urging countries to sign a pledge banning killing by autonomous machines, and 20 countries have gone along, she warned, “certain countries will absolutely refuse and that’s one of the scariest things.” Automated systems could kill by mistake, while relying on them could remove human responsibility and human intent from life-and-death decisions. “Take intent out and then no one is responsible for war crimes.”
