Towards a sustainable bot ecosystem

Everest Pipkin
Mar 1, 2017

This is a transcript of a talk given at The Electronic Literature Organization Conference at the University of Victoria, June 2016.

So- I was originally going to talk to you about selfhood and bots. This is something I’ve written about before (and if you’d like to hear more about it, those essays are online); the long history of granting non-humans humanity, from icon paintings to automata. And yes, such a tradition has some obvious relevance to how we relate to contemporary bots, particularly chatbots, who often exist on social media and are often described as a ‘they’ (or, sometimes, perhaps oddly- ‘she’ or ‘he’) rather than an ‘it’. I’d planned to talk about what the profile page does for a machine in the eyes of humans with similar profile pages, and how often the media describes current AI as entities with continued existence, rather than systems that spend most of their time off, nonbeing. I was going to talk about golems and other folkloric enchanted objects that gain sentience through being tasked; as well as fakes- humans masquerading as bots, like the Mechanical Turk.

But as I was arranging these thoughts into presentation format, I started having doubts. Why talk about entity? Why talk about humanity? This is perhaps the most obvious aspect of bots; before one sees their repetition, without reading their source code, it is pretty clear: they are cute little machines, and they are made to be like me.

I’m not sure if this is right.

For all of their front-facing social media presence and sometimes even learned ability to hold a conversation, bots are nothing like me. My gifting selfhood to them is projection and anthropomorphism, and although cute, there is perhaps a danger here. In the same way that research scientists urge against ascribing human emotions to animals whose cognition we cannot fully understand, seeing bots as tiny little automatic people, who tell jokes or draw pictures, is a massive simplification of what it is that they are doing.

I don’t want to overstate the complexity of bots; they are simple, remarkably so. Nothing in this space even approaches the complexity of some of the world’s most basic invertebrates. But the analog is not a wise one- to place a bot at the intelligence level of a worm is to judge it by the rules of worms. And they are not worms. Programming a creature to be intelligent like us overlooks what we can learn from things that are not like us.

Human cognition- although vastly textured and remarkably nuanced, as well as flexible- requires constant attention. One must refocus from task to task, and this makes us untrustworthy witnesses at best (although excellent at experiencing). Not so with bots; they have the capacity to listen at thousands of points at once, missing little to nothing, logging data at unfathomable rates.

So, yes, machines are not like us; we know this. But are there other useful analogs? If modeling bots after humans is silly at best, and detrimentally myopic at worst- what can we learn from?

As researcher and digital storyteller Tim Wright proposed in recent conversation with Matt Locke, if there is any analog to the type of conversation bots employ, it might be in plants. Like plants, bots are able to monitor many streams of data, without rest or sleep. Like plants, bots often operate in non-centralized, node-based structures. Like plants, bots demand a networked environment to succeed.

Of course, pushing plants onto bots is easily another anthropomorphism, botani-morphism (if you will), and I will be the first to admit the comparison may be a stretch. But it is perhaps valuable to see what aspects of plant systems can apply to bots- and whether that is a useful model going forward.

The language we use now to describe bots already nods in this direction- we talk about ‘bot gardens’, walled spaces not connected to social media at large where bots can run rampant, or ‘bots in the wild’- spambots and others not constructed by artists and technologists, but perhaps still proving interesting, or at least a valuable part of the landscape. We even talk about tending our bots, about new crops of bots, about bot communities.

If one were to replace the word bot with the word plant, the question of how to build a ‘good plant’ suddenly seems a bit silly. Rather, we would ask: how do we build a good environment in which this plant can grow?

Of course, measuring the health of natural systems needs to happen on their own terms, terms sometimes alien to a layman’s understanding of what is ‘healthy’ or ‘normal’. It is easy for us to see a secluded grove of cypress trees without understanding that in this particular place, perhaps, cypress is invasive and harmful. It takes deep study to learn the needs of each organism living in a place, as well as the ecosystem that supports them.

However, in human-made spaces like the internet, health is a metric that is easier to gauge. The internet (despite the massive amounts of robot traffic) is innately human, and we need not use technical metrics like ‘number of nodes supported’ or ‘density of connection’ or even speed to gauge its health. Instead, we are at liberty to look for human problems: the diversity of internet users, or the capacity for access outside of Europe, North America, and East Asia, and along demographic and economic lines.

Much of my bot work, like much of the bot work of others, is on Twitter. Although Twitter can be variably supportive, connective, and useful, it also allows for a terrifyingly large amount of anonymous abuse, harassment, and cruelty. This environment has at times been described as toxic- a generally useful phrase when gauging the health of an ecosystem.

Plants communicate through a system of fungal networks (which bear some structural resemblance to our physical internet), sharing resources with seedlings and warning other plants of insect infestations. Although the field is under-researched, it seems that these systems of communication are almost universally mutually beneficial. As happens so often in the natural world, even situations that appear one-sided or parasitic often serve a function for both sides. For example, when some plants have their leaves chewed by insects, they release a chemical warning that is ‘picked up’ by surrounding plants. A traditional model of reward would posit that only the plant’s relatives were the intended recipients of such a warning, and any other plant in the know is more-or-less spying. However, a circle of amped-up chemical defenses triggered by this warning helps keep local bug infestations at bay, and may ultimately save the original plant, as well as keep the bugs from overpopulating.

This type of environmental control that benefits relatives and non-relatives alike is rare for animals, but is perhaps a necessary survival strategy when one is rooted in place. Regardless, natural systems are eminently good at self-regulation. And it raises the question: are our bots giving back to their environment in a similar way?

On a functional level, of course, yes- every server acts as a transit node, and our machines are busy shuffling packets around the internet.

But the question is also an emotional one: are our bots benefiting the human environment of social spaces? How are they treating the humans who live in these systems? Are we using them to their best capacity- monitoring these spaces, listening for abusive language? What does harm look like in the spaces we’ve built? Is my bot hurting people? Is it being cruel? Is it using language I do not approve of? Can it police its own language? Can it guide the language use of others (a tireless leading by example)? Can it learn what is harmful, and not repeat those errors? Can it inspire new bots? Can it inspire new bot-makers?
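To make one of those questions concrete: the simplest version of a bot policing its own language is a check against a blocklist before it ever posts. What follows is a minimal sketch in Python- the blocklist entries and the posting function are hypothetical placeholders, stand-ins for a maintained, evolving list (like the one behind Darius Kazemi’s wordfilter) and a real social media client:

```python
# A minimal self-check a bot might run before posting.
# BLOCKLIST and post are placeholders; a real bot would want a
# maintained, evolving list and far more nuance than substring matching.
BLOCKLIST = {"example-slur", "example-insult"}  # hypothetical entries


def is_harmful(text: str) -> bool:
    """Return True if any blocklisted term appears in the text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)


def post_if_safe(text: str, post) -> bool:
    """Post only if the text passes the self-check; otherwise stay silent."""
    if is_harmful(text):
        return False  # discard (or regenerate) rather than do harm
    post(text)
    return True


# Usage: post_if_safe("hello, world", post=print)
```

The design choice worth noticing is the failure mode: when the check trips, the bot stays silent rather than posting- a small, automatic act of tending.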

If we are making bots (and if bots are, for the sake of this conversation, plants), then we are gardeners. And something that every gardener knows is that you cannot just tend to a single plant.

Yes, you can plant a flower, and prune it and fertilize it and water it every day. And it might bloom, and you can pick those flowers. But if the soil it is planted in is toxic- if it becomes infested with aphids, or rot- if the plants around it are causing it harm- then it will not thrive.

As a gardener, one does need to tend to the tomatoes- but one also needs to tend to the system they live in. To not water on days it rains- to cover the plants when it is cold- to plant a border of marigolds to keep the spring caterpillars away. Tending to the tiny system of a backyard, or the bigger one of a field or forest, or the huge one of the internet, is a massive task. It requires a structural, systems knowledge of how component parts fit together, how humans exist in this system, and how they may grow and change without spiraling out of control.

When I was writing this talk the first time, I wanted to talk about selfhood, and entity- why we ascribe these things to bots and what it does to them. What I realized in the writing is that fierce individualism does much the same thing to bots as it does to humans; it pulls us from our community, excuses us from answering to systems problems, and poses us as colonialists against wildness. But the mythos of the lone ranger out against the void is not helpful- it is isolating.

Thinking of bots as plants is perhaps most useful in this way; it is easy to remember that they are a component piece of a communicating system, much like us.

So I ask, when creating work that lives online, that you consider the internet as much an environment as a backyard, or a field, or the earth at large. And although space once felt infinite there too, there are no truly wild places; they are all home to something, and more often than not, someone.

It is well within our collective power to recognize the systems-health of the spaces we have built. To be conservationists, and ecologists, and responsible gardeners- making bots and other interfaces that function sustainably. After all, we live here too.
