Art, craft, game-theoretic cognition and machine learning

Originally published on Language and Philosophy, July 12, 2022

Here in Istanbul, you cannot but admire the Turkish carpet and the mosques of Sinan, the carpet a wonder of intricacy, the more complex and detailed the more wondrous, and Sinan’s grand mosques a wonder of simplicity, purity and restraint even when scaled to the most expansive heights. If you are an idle wonderer with time to think about questions almost too obvious to ask, you might puzzle over why there are no simple carpets when the simplicity of the mosques is so overwhelmingly effective. Why can’t carpet makers avail themselves of modest simplicity in their craft, when purity and humility can reach so deeply into the human heart and mind?

The goals of craft are not the goals of art, no doubt. But what’s the difference? Or better, why such a difference? A good libertarian, and a good Darwinian — and they might as well be the same — would ask first where the market incentives lie. The answer will go a long way to explain the traditional crafts of intricacy. Maybe not so far with art.

First, let’s look at the differences. Craft can be learnt by almost anyone. That this is so couldn’t be more obvious from the tradition of handing the craft from generation to generation. Not so with art. There are a few great artistic families, the Bachs and Holbeins and Mendelssohns, and certainly Beethoven and Mozart grew up in musical families, but how many of us listen avidly to Ludwig senior’s compositions (I’ve never heard a single one), or to papa Mozart’s, or even to Bach’s sons’, famous as they were in their day? I’d guess that Holbein’s brother, had he not died so young, would have surpassed Hans, but this is the exception that proves the rule. That is, Ambrosius proves by demonstration that extraordinary artistic talent can be shared within a family. So why not the Bachs, the Mozarts, the Beethovens? And whatever happened to Vincent’s brother Theo, the art dealer, and all the other artistically talentless siblings, parents and children?

So maybe the incentive is to blame. The traditional crafts provide a reliable source of income, the arts don’t, so children or siblings of artists might choose an alternative route. But this unreliability is only relevant to unsuccessful artists, so the incentive argument begs the question. Successful artists can be far wealthier than any craftsperson. The question has merely been restated: why is a career in the arts so risky and a career in the crafts so safe; why are the arts not a reliable source of income? And now there is also a mystery: why does anyone pursue art at all if the financial incentive is so unreliable?

Stay for a moment with the crafts, handed from generation to generation, a traditional income for the whole family. If the craft is a reliable source of income, and that reliability is what draws the income-seeking, then the goal of craft is income. This seems obvious just from looking at the intricacy of such crafts, since intricacy is labor made visible. The more labor devoted to the artifact, and the more evidence of it in the product, the more value. What is the patron looking for and paying for in a traditional craft, after all? The wealthy patron wants an artifact that demonstrates visibly to the patron’s friends or clients or guests that this object fetched a high price, so he must be wealthy, with money to burn. There’s your incentive and there’s the explanation for all that intricacy. The more intricate, the more proof of labor, the more value in the artifact, the more evidence of the wealth of the patron. Simplicity in such a context signals absence of labor, lack of value. There’s no room for the virtues of simplicity. It plays a role only if there is a down market of cheap goods for fashion followers who haven’t the resources to buy an expensive carpet.

So the financial incentive yields an artifact that confines itself to intricacy for the purpose of appealing to the patron’s pocket, not to the patron’s emotions, not to his ideas or his politics, not to his morality, not to his mind. Just his pocket. Pay more, get more labor in the artifact in the form of greater intricacy. The craft and its artifact are a relationship between the laborer’s skill, the material she or he works, and the pay that work commands, nothing more. The only modulations are between more labor (intricacy) and more material — more items or larger items.

What about the arts? An artist must also master the skill of manipulating a material, but has to manipulate as well the emotions or ideas, or politics or moral sense of the audience. The art is a relation between the artist, the material and the mind of the audience, and really the primary material is the audience’s mind. To manipulate the mind, the artist will often hide the craft (the labor) to achieve a seamless illusion of reality, not display the artist’s intent to manipulate, which would undermine the manipulation. (That’s why Brecht’s breaking the fourth wall was radically innovative — the point of drama is to create the illusion of reality, not draw attention to the author.)

The art will also have content, not just intricacy. The content may refer beyond the material. It may consider the context surrounding the art — in drama the context might be the society and its ills, say, or for architecture the entire cityscape. The artifact must not be just an elaborate structure. It must have a place in the world, in the audience’s mind. It can even create a world. Art is an interactive game between the artist and other minds and anything that could be contained in those minds. It may contain many worlds, unbounded in number and form.

What about the incentive?

Well, yes, what about the incentive. Like the best of the sciences and maybe all of sports, the primary incentive is not financial; it’s some other kind of reward. Recognition and approval, esteem and pride must be in there, and competition among peers, but surely, above all, the love of engagement with the audience through the artform. Money? That might be a necessary condition for an artist’s choice, but not a sufficient one.

What is it about this game theoretic activity — the manipulation of other minds — that draws or cultivates such extraordinary abilities? It’s not dull labor for financial gain. That’s a one-dimensional activity. The artist might even challenge and insult the audience. Not a craftsperson. Artistry is more social and more fun.

So how is all this related to machine learning and Grice?

*****

I was listening to a couple of Sean Carroll podcasts a few weeks ago, one interviewing Tai-Danae Bradley, the other with Gary Marcus, both about machine learning. The Yoneda Lemma is the connection. The Yoneda Lemma, according to Tai-Danae Bradley, tells us that the meaning of a word can be completely derived from all its contexts.

https://www.youtube.com/watch?v=OynLbSzLS9s&embeds_referring_euri=https%3A%2F%2Flanguageandphilosophy.wordpress.com%2F&source_ve_path=MjM4NTE&feature=emb_title
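Bradley’s claim can be caricatured in a toy distributional model: represent each word by the bag of words it co-occurs with, and compare words by the overlap of those bags. A minimal sketch, with an invented four-sentence corpus (the word lists, window size and similarity measure are all my illustrative choices, not anything from the podcast):

```python
from collections import Counter

# Invented toy corpus: each "sentence" is a whitespace-separated string.
corpus = [
    "the carpet was woven with great skill",
    "the carpet was knotted with great skill",
    "the mosque was built with great restraint",
    "the mosque was designed with great restraint",
]

def context_vector(word, sentences, window=2):
    """Count the words occurring within `window` positions of `word`."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda w: sum(c * c for c in w.values()) ** 0.5
    return dot / (norm(u) * norm(v)) if u and v else 0.0

# "carpet" and "mosque" share most of their contexts in this tiny corpus,
# so a purely context-based learner scores them as near-synonyms.
print(cosine(context_vector("carpet", corpus), context_vector("mosque", corpus)))  # ≈ 0.8
```

On this picture, “meaning” just is the context vector, which is exactly the reductionist move the challenges below push against.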

I can think of at least four challenges to this broadly behaviorist, reductionist assertion. For one, a neologism may be introduced by its coiner with a clear meaning while its full range of contexts remains underdetermined. The coiner will understand it in detail, but a machine learner won’t have access to that information. To assert that the meaning of the word is therefore underdetermined would be circular, and it would ignore two things: that the coiner may have had a clear and distinct concept of its meaning even though others in the speech community did not, and that those others, reading the coinage, will likely be able to guess its meaning by a kind of process of elimination — asking first which familiar word this new word is replacing. That is, we can learn not just from the contexts of the word itself but from analogous contexts: insert an expected word to derive the expected meaning, then reverse engineer the meaning of the new word.

A second challenge is a sorites-type puzzle. Contexts may be inconsistent. At what point do we judge that, say, “meme” refers to ideas that circulate among humans (following its coiner, Dawkins) or an online gif, often including either a cat or movie clip, used in place of a linguistically articulated judgment? This puzzle usually has an easy solution. The word has become ambiguous with two historically related but now very distinct meanings. The slippery slope here isn’t disastrous either. It’s no skin off my tooth to grant that the Australian Prime Minister’s idealect — his personal dialectician of English — has a “suppository of wisdom”, or that Rick Perry’s has “lavatories of innovation”. I’m sure I’ve made equally silly maladropisms without even confusing the reader. More likely the reader is amused or gloating depending on how sophisticated or how much of a troll the reader is, but not confused. So let a thousand flowers boom.

A third challenge raises an old behaviorist, reductionist quandary, from Willard Van Orman Quine. Consider any two concepts that denote the same set of individuals. His example was cordates and renates: animals with a heart and animals with kidneys. All mammals have hearts and all mammals have kidneys, so the expressions “mammalian cordates” and “mammalian renates” designate the same set, but they clearly don’t have the same meaning. And if every known, actual use of one (as distinct from every possible use) could be exchanged for the other, then, as with the neologism, the contexts won’t reveal the difference of meaning in the minds of their users.

All these cases assume that we know the meanings of words somehow beyond the contexts in which they are used. If we humans knew the meanings of words only by their contexts, then we would be just such big-data learners, and the words “renate” and “cordate”, if the actual contexts never distinguished them, would have to be considered synonymous. The reason we don’t consider them synonymous is simply that we avail ourselves of the dictionary. Of course, the dictionary is a context too, so if we include the dictionary in machine learning then there will be strong evidence that the Yoneda Lemma holds. But if the machine learner avails itself of the dictionary, then who needs a Yoneda Lemma or big data? Just consult the dictionary — insert the meanings of words as brute input. The machine now knows, but no learning has happened. And no prediction of language shift is possible either, since the dictionary is just a reflection of historical use, not a determiner of it.
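The Quine worry is mechanical under a context-based picture of meaning: if “cordate” and “renate” happen to occur in exactly the same contexts in the corpus, a purely context-driven learner has no choice but to score them as perfect synonyms. A sketch, with an invented two-sentence corpus chosen so the contexts coincide:

```python
from collections import Counter

# Invented corpus in which "cordate" and "renate" are contextually interchangeable.
corpus = [
    "every mammalian cordate is warm blooded",
    "every mammalian renate is warm blooded",
]

def contexts(word, sentences):
    """The multiset of co-occurring words in sentences containing `word`."""
    bag = Counter()
    for sent in sentences:
        tokens = sent.split()
        if word in tokens:
            bag.update(t for t in tokens if t != word)
    return bag

# Identical context bags: a context-only learner must call these synonyms,
# though having a heart and having kidneys are different concepts.
assert contexts("cordate", corpus) == contexts("renate", corpus)
```

The difference of meaning lives in the speakers’ minds, not in any distribution the learner can gather from this corpus.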

This is really all to say that what’s in the mind is not necessarily accounted for by the actions that proceed from the possessor of that mind. Knowing and doing are not identical, so there should be circumstances in which the two can be distinguished, circumstances in which gathering the one will not entirely account for the other. One might imagine having taken an action without actually having done it. It’s delusional, but it happens, probably more often than we’d like to admit. Or the reality we believe we live in might not have an unambiguous relation to the actual physical world we live in. We might be certain of what a zipper is without realizing that we don’t know how it works, or think we know exactly what some deity is but, when pushed, can’t state any of its properties. Conversely, as in the Quine cases, one’s actions might be ambiguous evidence for one’s decisions or ideas.

Gary Marcus in his interview addresses similar problems for machine learning. Not everything humans know can be known by extensional learning — big behavioral data. (I think we should give it the name BBD, because it is a very specific kind of limited data, and I’m going to suggest a different kind of data for learning.) Gathering behaviors may account for actual actions, but not possible actions, and the possible actions reveal the difference between mere denotation in the actual world and the meaning of the word.

Machine learning is fully adequate to extensional use of language — the actual uses in the real world — but not for intensions, which include all possible uses beyond the actual ones. There, machine learning runs straight up against the inductive fallacy. (A thorough historical treatment would run from Frege to Montague and Quine; this is a very brief summary of the background issues. Here’s the Marcus interview)

So much for the familiar challenges. I want to look at a fourth problem for machine learning, a game theoretic problem that seems to be missing from the AI discussion. The meaning of an expression in use — in conversation, which is the point of language and without which symbolic language would never have evolved — is a game theoretic equilibrium, not restricted to the value (meaning or reference) of the word defined in the code/language.

Think of the difference between coding for a simple input-output device and coding for an interactive interface in which the algorithm must “guess” at the intentions of the user. This is analogous to what I was saying about the traditional decorative “intricacy” crafts. Craft is a relation between the craftsperson and the material (for the sake of exhibiting labor in the product for the rich patron), via the tools of the technology that manipulate the material. Art is a relation between the artist and the audience via those same tools, but also via Theory of Mind, to manipulate the mind of the audience — the user. So for machine learning, a sarcastic use of, say, “brilliant” (“You got an A: brilliant!” “You dumped hot coffee in my lap. Brilliant!”) will be interpreted as homophony or auto-antonymy — the sound sequence “brilliant” having two meanings, in effect two words with identical sounds, like “fast” (run fast, an intermittent fast) or “left” (she left home, she took a left turn), or auto-antonyms like “cleave” (cleave to a friend, cleave apart) or “dust” (dust a field with glyphosate, dust the table with a rag).

But for English speakers, sarcasm is not homophony or auto-antonymy. It’s one and the same word used differently depending on the mutual knowledge of the participants in the conversation: you got an A, brilliant; you spilled hot coffee in my lap, brilliant — as code, the symbol still means “smart”. Proof: replace “brilliant” with a synonym like “smart” and the two meanings are unchanged — not so with “cleave” or “fast” (run quickly, intermittent quickly??). The value in the conversational game is an equilibrium based on mutual information (speaker and addressee both know that spilling coffee is not brilliant according to the conventional meaning or use of the word in English).
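The substitution test can be made mechanical, at least as a caricature. In the sketch below (the lexicon entries and sense labels are all invented for illustration), a homophone like “fast” simply gets two unrelated lexicon entries, while “brilliant” gets one; sarcasm is modeled as a pragmatic operator applied after lexical lookup, so substituting the synonym “smart” leaves the sarcastic reading intact:

```python
# Invented toy lexicon: homophones get multiple senses; "brilliant" gets one.
LEXICON = {
    "fast":      ["quick", "abstain-from-food"],  # true homophony: two entries
    "brilliant": ["smart"],                       # one entry, whatever the tone
    "smart":     ["smart"],
}

def readings(word, sarcastic=False):
    """Lexical senses, with sarcasm applied pragmatically on top of each sense."""
    senses = LEXICON[word]
    if sarcastic:
        return [f"not-{s}" for s in senses]  # sarcasm inverts the single meaning
    return senses

# Homophony is a property of the lexicon: "fast" has two unrelated entries.
assert len(readings("fast")) == 2

# Sarcasm is not: it operates on the one sense, so a synonym patterns identically.
assert readings("brilliant", sarcastic=True) == readings("smart", sarcastic=True)
```

The point of the sketch is the division of labor: the two readings of “fast” live in the lexicon, while the two readings of “brilliant” live in the conversational game, outside anything a context-counting learner stores per word.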

So this is yet another failure of extensional inductive learning — a particularly narrow, one-sided materialist reductionism. In other words, a truly successful machine learner, beyond merely gathering or analysing data, would have to *experiment* with synonyms of “brilliant” and “fast” to figure out the type of use difference, and then *speculate* on why there’s such a difference — *conjecturing* about what’s actually represented in the minds of the speakers. It’d have to engage in the speculative process of scientific theory creation, not just the calculation of averages. Prediction and experiment followed by error correction, but not just Popper-style or, if you prefer, Friston-style: a game-theoretic prediction, Darwin- or Dawkins-style (as in sexual selection or the extended phenotype, where the products of selection themselves participate in the selection), answering not “what is this symbol’s fixed reference in use?” but “what are the rules of this game?” You see the difference.

Sarcasm is only one of many consequences of this Gricean game-theoretic equilibrium. Grice mentions “possibly” and its (defeasible) implicature of “not actual”. So in “It’s possible to construct a car that runs 300 mph. In fact, Fiat made one but couldn’t market it”, the “in fact” phrase is used to remove the implicature that “it’s possible, but merely possible, not yet actual”, an implicature which would stand were it not for the “in fact” phrase. Similarly, “it might be raining” implicates that it might also not be raining. These implicatures are explained by Grice’s game-theoretic rules. The equilibrium also has broad evolutionary implications for human gullibility, including adherence to religious beliefs (yet another discussion) and cases in which humans fail where chimps succeed, and it further supports Marcus’ point in the podcast.
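Defeasibility is the formal property doing the work here, and it can be sketched as default reasoning: “possibly p” carries the default implicature “not actually p”, which a later “in fact p” cancels rather than contradicts. A toy model (the representation of utterances and propositions is my own invention, not Grice’s notation):

```python
def interpret(utterances):
    """Accumulate commitments; implicatures are defaults that later assertions cancel."""
    asserted, implicated = set(), set()
    for kind, p in utterances:
        if kind == "possibly":
            asserted.add(f"possible({p})")
            implicated.add(f"not({p})")      # default: merely possible, not actual
        elif kind == "in_fact":
            asserted.add(p)
            implicated.discard(f"not({p})")  # cancellation, not contradiction
    return asserted | implicated

# "It's possible to build a 300 mph car" alone implicates no one has built one...
assert "not(car)" in interpret([("possibly", "car")])

# ...but adding "In fact, Fiat made one" removes the implicature, with no contradiction.
final = interpret([("possibly", "car"), ("in_fact", "car")])
assert "not(car)" not in final and "car" in final
```

Had “not actually p” been part of the coded meaning of “possibly”, the “in fact” continuation would produce a contradiction instead of a smooth cancellation; that asymmetry is what marks the implicature as a move in the game rather than part of the symbol’s fixed value.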

