Language is Literally Alive

People often describe languages using the vocabulary of living things. We call Latin a “dead language,” we say that certain linguistic developments are “healthy” or “natural,” and we describe languages on the verge of “extinction” as being “endangered.” To most people, this simply means that life is a strong metaphor for language. To me, though, it’s something more. To me, language is (quite literally) a form of life.

Life is one of those words that everyone wants to define, but that resists being pinned down. Let’s look at the dictionary definition of the word:

life (noun)

  1. The condition that distinguishes animals and plants from inorganic matter, including the capacity for growth, reproduction, functional activity, and continual change preceding death.

Note that this definition tells us a little about what life is, and a little about what it isn’t. We learn that life involves certain processes: growth, reproduction, functional activity, continual change, and death. And we learn that life is not inorganic matter.

But doesn’t that “not inorganic matter” clause seem a bit out of place? Artificially limiting? This sort of thing is a common point of discussion in Artificial Intelligence circles. Much like “life,” “intelligence” has a definition that presents an ever-moving goalpost. The term “Strong AI,” for example, defines intelligence in direct relation to human intelligence. That is, artificial intelligence only becomes “true intelligence” when it is functionally identical to intelligence we already acknowledge.

Personally, I reject that notion. It’s entirely possible for an artificial intelligence to match human intelligence in power and utility while bearing no resemblance to anything we’ve seen before. Intelligence may be a broad concept, but as humans our experience is limited to the intelligence of other humans and animals. The nascent intelligence of computers seems an odd and alien thing to us, and so we refuse to acknowledge it for what it is. To justify this, we build definitions that (by design) exclude whatever we’re uncertain about. It’s as if birds decided that planes can’t really fly because they don’t flap their wings.

As such, I submit another definition, drawn from the relationship between entropy and life:

life (noun)

  1. A self-sustaining localized reduction of entropy, characterized by its capacity for growth, reproduction, functional activity, and continual change preceding death.

This definition is one I’ve borrowed from researchers on artificial life, and it relies heavily on the concept of “entropy.” Loosely defined, entropy measures the “chaos” of a system: a high-entropy system could be in any of a huge number of possible states, while a low-entropy system is confined to a smaller number of well-defined states.
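
To make “number of possible states” a bit more concrete: information theory quantifies exactly this with Shannon entropy. Here’s a minimal Python sketch (the function name and the sample strings are just my own illustration, not anything formal from the definition above):

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (in bits) of the empirical distribution of items."""
    counts = Counter(items)
    total = len(items)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A "chaotic" sequence: eight observations, eight distinct states.
print(shannon_entropy("abcdefgh"))  # 3.0 bits -- maximally spread out

# An "orderly" sequence: the same length, but bound to just two states.
print(shannon_entropy("aaaaaabb"))  # ~0.81 bits -- far more predictable
```

The first sequence could be anything at every position; the second is nearly locked down. That’s all “high entropy” versus “low entropy” means here.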

If we look at things we normally think of as life, we can see how this definition applies. Living things tend to occupy highly ordered states (i.e., low entropy). Their molecules are arranged in specific structures like cells, and they have internal processes that sustain those states. A living animal is not going to spontaneously transform into a dissociated conglomeration of water and organic molecules…at least, not until it dies.

So, now that we’ve gone through the work of defining what life is, how does this apply to language? Language doesn’t rearrange molecules into orderly states the way organic life does. But there’s more than just chemical entropy out there. In fact, pretty much any system with multiple possible states can have an entropy, and information itself is such a system.

Consider the human brain, awash with disordered information. We have neurons firing from our eyes, ears, mouth, and every nerve fiber in our body. Within that flood of signals, there exist underlying associations: objects we can see, colors we can experience, sounds we can hear, and more. Language is what organizes all of this information. Through language, we classify a Ford Mustang as a car, a Harley Davidson as a motorcycle, and both as motor vehicles. Language, with all of its words and syntax, reduces the entropy of a set of information by defining concepts and forming connections between them. Moreover, all of those functional markers of life still apply: languages grow, reproduce, perform functional activities, and are constantly changing (much to the chagrin of language prescriptivists everywhere).
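
We can even watch that entropy reduction happen in miniature. Building on the shannon_entropy sketch from earlier (the sightings and the mapping are a made-up toy example of mine, not data from anywhere):

```python
# Raw observations: specific things encountered out in the world.
sightings = ["Ford Mustang", "Harley Davidson", "Ford Mustang",
             "Toyota Camry", "Harley Davidson", "Honda Civic"]

# A tiny "language": a shared mapping from specific things to concepts.
concepts = {
    "Ford Mustang": "car",
    "Toyota Camry": "car",
    "Honda Civic": "car",
    "Harley Davidson": "motorcycle",
}

# Entropy of the raw stream vs. the stream as the "language" sees it.
print(shannon_entropy(sightings))                         # ~1.92 bits
print(shannon_entropy([concepts[s] for s in sightings]))  # ~0.92 bits
```

The vocabulary coarse-grains many specific states into a few shared concepts, and the measured entropy drops: a toy version of the self-sustaining, localized reduction of entropy that the definition above asks for.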

Languages are powerful beings, and humans have a natural symbiosis with them. Humans can better manage the information they are exposed to, because they have language to help organize it. And languages use humans not only as the source of new information (which they need to grow), but also as a vector for reproduction. When children are born, we immediately try to teach them our language, and when we meet others who speak differently, we try to reach common ground and learn each other’s languages. Note: In this instance, I’m referring to each individual’s interpretation of a language as a separate (but related) “language.” That is, my version of English is different from your English. They are members of the same “species,” but separate. Kind of like how I could have a beagle, and you could have a beagle, but that doesn’t mean we have the same dog.

Moreover, languages crop up wherever there’s information that needs organizing, and they are specialized to their information environments much as animals are adapted to their physical environments. Computer languages, for example, are languages (they organize information within a computer), but where humans and human languages are naturally fuzzy and implicit, the lowest-level computer languages (like assembly) are rigid and explicit. This is a direct result of the structure of computers, and thus of the environment those languages must adapt to. Higher-level languages, like C++ or Java, have to let humans and computers communicate, and so they strike a balance between hyper-explicit assembly and wildly interpretive human languages (like English).

There’s a whole lot we can learn from treating language as a form of life. One of the most important lessons, in my mind, is that it can aid us in our search for life throughout the universe. In our search for “life as we know it,” we may be blind to life as we don’t. If we can live in the constant presence of a life form we use every day and not even recognize it, how can we be expected to find new forms of life that test our boundaries? Recognizing that language is alive gives us a concrete example with which to test our new, broader definition of the term. It can spark our imaginations to look for other systems that can be organized, other instances where a self-sustaining region of low entropy can grow and multiply. And maybe in the process, we can discover new forms of life that exist not in the space beyond us, but right here with us.

So what do you think? Is language alive, or is this just a metaphor taken a step too far? What can we learn from examining language as a living thing?
