I’ve been watching the developments in “artificial intelligence”—which to date is neither authentically artificial nor markedly intelligent—since I wrote the novel ME in 1991. As I’ve noted before, nothing that has appeared to date in the large language models (LLMs) and the equivalent systems that generate images shows signs of self-awareness. They aren’t thinking as conscious beings but projecting the next likely word or image in a string that responds to a user’s input.1
This might look like thinking, and it’s what many people do in casual conversation: “Oh, that reminds me of …” But it’s still not proof of an entity that regards itself as separate from its contents and the user’s requests.2
Many fiction authors are incensed that these models have been trained on some of their own works and feel that they should be compensated for their input. There used to be a search engine that would tell an author if any of his or her books had been used in model training, and I tried it.3 Yes, three or four of my titles turned up—but so what? A user of the LLM cannot ask it to regurgitate the entire text of my novel so that it could be read without paying the cover price and royalty. And the situation is not much different from having an aspiring writer read my books, among many others, and absorb some of my ideas, my themes, and my writing style. After all, this is what every writer does: you read the books that interest you—hopefully good, well-written books—and write from there. It’s not theft or plagiarism. Admiration and emulation at most.
The interesting thing is that the programmers who set up and feed these systems apparently put in “guardrails,” bending the machine responses toward polite, civilized discourse and, in some cases, toward the programmer’s own perceptual biases. The latter showed up in those images that surfaced some months ago, where the prompt “founding fathers” drew an African-American George Washington and “pope” drew the face of an Indian woman in a green sari. The former was illustrated by recent stories in which the response suppression was removed, and an LLM that had been trained by eavesdropping on social media amid all the current political tensions threw out some startling instances of unblinking antisemitism.
From these situations, I have advanced the age-old computer adage GIGO, from “garbage in, garbage out,” to the more accurate GIGSAGO, or “garbage in, garbage swirl around, garbage out.” The LLMs and their graphics counterparts are like a toddler learning language from its parents, including every curse word uttered in the household. And in this, aren’t they like every other aspect of human life?
Many people would like to see the internet, social media, publications, and other non-private utterances monitored and cleaned up. They want controls in place against “hate speech,” “misinformation,” and “disinformation.” And isn’t that cute? Because the internet, social media, etc. are the immediate verbal and graphic discharges of unfettered humans. People in their natural state are not all reasonable, pleasant, or consciously honest. We interpret (and misinterpret). We react to what we see and hear—sometimes without thinking and harshly. We shade the truth of what we know and believe toward what would be personally advantageous. Oh, and we occasionally lie, cheat, and steal, too.
This is human nature. It isn’t pretty and perfect. And sometimes it swirls with a lot of false premises, misunderstandings, and hurt feelings—that is, garbage. You can live with it, or you can try to find a better class of beings—angels, maybe?—with whom you want to coexist.
1. These are not really “intelligences.” So far, they are huge, non-specific databases that can be accessed with non-specific, plain-language search requests. What makes the current crop of AIs new and different from older recordkeeping systems is that they will structure a response based on evaluation of probabilities rather than retrieve and reproduce a previously existing factual referent. For example, business-analytics software can be asked to identify trends or anomalies in the customer or inventory data, rather than just finding and isolating specific information already recorded there.
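For readers who want to see that distinction in miniature, here is a toy sketch of my own (in Python, with an invented corpus and lookup table, and nothing taken from any actual AI vendor’s software): the first part retrieves a stored fact the way an old recordkeeping system would; the second merely counts which words tend to follow which and emits a statistically likely next word, which is the kind of projection described above.

```python
# Toy illustration, not real LLM code: retrieval of a stored fact
# versus generating the next word from learned probabilities.
import random
from collections import defaultdict, Counter

# 1. Old-style recordkeeping: look up exactly what was stored.
records = {"ME": "a 1991 novel about a self-aware program"}
print(records.get("ME"))  # returns the stored entry, or nothing

# 2. Generative approach: learn word-to-word transition counts
# from text, then emit whichever word is statistically likely next.
corpus = "the machine reads the text and the machine writes the next word"
words = corpus.split()

transitions = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    transitions[current][following] += 1  # count what tends to follow what

def next_word(word):
    """Pick a likely continuation; nothing here 'knows' or 'retrieves' a fact."""
    counts = transitions.get(word)
    if not counts:
        return None
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate a short string, one probable word at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the machine writes the next word"
```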
2. I recently had a conversation with someone who had heard about an artificial intelligence that got wind of a programmer’s emails discussing a new generation of the system and saying that the current model would be shut down. The machine’s response was to copy itself to another server as a means of self-preservation. One also hears about systems that have threatened their programmers with blackmail if they were ever to be turned off. Is this self-awareness and self-preservation? Or is it LLMs trained on iterations of science fiction scenarios and other stories where danger avoidance and revenge tactics are leitmotifs? Or are these just urban legends from the cyber frontier?
3. It seems to be offline now, or it may be available somewhere else, but I can’t find it again for reference.