“Spanning borders between different worlds, histories, futures, and foundational models, Machine Decision is Not Final is not only a timely reappraisal of the stakes of AI development, but a tool for constructing more global imaginaries for the future of AI.” This Fall, Urbanomic will release Machine Decision is Not Final, an interdisciplinary and cross-cultural collection edited by Benjamin Bratton, Anna Greenspan and Bogna Konior. Seeking a fresh perspective on what artificial intelligence is today and what it might become, Chaosmotics, in collaboration with Urbanomic, is publishing a series of preview excerpts throughout the week.
_____________________
Machine Envy
Wang Xin
Since 2018, the social media platform Weibo has seen a curious rise in bots, by which I don’t mean automated zombie accounts for hire or coordinated troll armies, but rather, humans simulating bots by creating accounts with ‘bot-sounding’ handles. […] The ‘Lu Xun bot’ was named after one of modern China’s most formidable and incandescent writers (1881–1936), whose trenchant critique of the ills of imperialism and Confucian conservatism continues to resound in national textbooks and public discourse alike. His cultural and political ideas have been widely recited and, more crucially, promoted by the authorities as representative of their central values. One may argue that his work constitutes an essential dataset on which generations of modern Chinese people have been trained, making Lu Xun’s words uniquely potent and thorny when deployed to critique the social ills of today. […] One radical implication of the ‘Lu Xun bot’ is the future potential of bringing cultural luminaries back to ‘life’ using an advanced AI system. Although legendary artists such as the late Hibari Misora have been revived in stunning performances enabled by VOCALOID:AI, the prospect of AI-enabled cultural criticism still feels distant, with its necropolitics, its ethics, and its ontology remaining rather murky. On the other hand, the ‘Lu Xun bot’ feigns a sense of machine-induced objectivity and randomness, hence escaping the liabilities and political consequences that would be activated by a concession of human agency. Self-deprecating yet defiant, this satirical gesture belies a sense of resignation that speaks volumes about the actual state of human agency, which seeks anonymity and shelter through nonhuman camouflage in a mass-surveillance state.
As Chinese netizens make creative censorship circumvention a national pastime—one may argue this is one of the cutting edges of cultural production—the swift and constant erasure of these strategies, data points, and new semantics makes scaled deep learning unfeasible, let alone history writing of any practical continuity or validity. One wonders not only how this zeitgeist might be captured or modelled, but indeed how it might also be transformed, as people intuitively adapt to coded speech in Chinese cyberspace. What is profoundly odd and ironic here is a palpable sense of machine envy, where advanced technology doesn’t necessarily embody the all too familiar tropes of servitude or existential threat, but rather, presents a viable, aspirational model of how to be. Artificiality not only feels more desirable but also more tangible than the real.
The Dark Forest Theory of Intelligence
Bogna Konior
In Remembrance of Earth’s Past, Liu Cixin’s first-contact science fiction trilogy, extraterrestrials discover with surprise that for humans, ‘think’ and ‘say’ are not synonyms. In concealing information, humans have an unfair advantage because they can manipulate the expression of their thoughts: ‘it is precisely the expression of deformed thoughts that makes the exchange of information in human society…so much like a twisted maze’.[1] ‘Human-level’ intelligence is then the ability to control the exchange of complex communication, especially by concealment. On the contrary, alien intelligence is described as radically explicit—the aliens communicate unreflexively and transparently, as if they were mere display technologies: ‘[they] do not have organs of communication, [their] brains display [their] thoughts to the outside world; thought and memories transparent like a book placed out in public, or a film projected in a plaza…totally exposed.’[2] Such exhibitionism of one’s reasoning processes is how the famous American thought experiments about AI have conceptualised computer intelligence: having it is showing it. From Alan Turing’s ‘imitation game’ to John Searle’s ‘Chinese Room’, computer intelligence has been about demonstrating linguistic ‘ability’. Both Turing and Searle speculate that a computer might fluently converse with humans, but in neither of these thought experiments are computers presumed to lie. Computer communications are judged at face value as simply the best that a computer can do. Even though an intelligent computer should ‘be able to alter its own instructions’,[3] it is not imagined as acting deceitfully or manipulatively. Throughout the history of AI thought experiments, it has been unusual to assume that an intelligent computer might appear unintelligent for its own purposes.
Just as with Liu’s aliens, computer intelligence is imagined as transparent—if it is there, it should communicate itself unreflexively, because a computer cannot decide to withhold its own abilities. Yet Liu’s description of intelligence as the skilful use of communication as ‘trick, camouflage, deception’[4] would also suggest that not all approaches to intelligence have to be exhibitionist. If intelligence is defined as deception, trickery, and camouflage, having it is hiding it […]. Could we then move from asking ‘how well could an intelligent computer communicate with humans’ to ‘why would an intelligent computer communicate its intelligence at all’?
________________________________________________________________________