Sunday, January 28, 2024

The World in a Blur

Robot juggling

As noted earlier, artificial intelligence does not approximate the general, all-round capability of human intelligence. It doesn’t have the nodal capacity. And it won’t have an apparent “self” that can look at the world as a whole, form opinions about it, and make judgments—in the words of the Terminator movies, “deciding our fate in a microsecond.” Or not yet.

For now, artificial intelligences will be bound to the design of their neural nets and the universe of data sets upon which they have been trained. That is, Large Language Models like ChatGPT will play with words, grammar, syntax, and punctuation, study story forms and sentence structure, and link ideas verbally—but they won’t paint pictures or have political opinions, or at least no opinions that are not already present in their libraries of material. In the same way, the graphics bots that create images will play with perspective, lighting, colors, edge shapes, and pixel counts but won’t construct sentences and text. And the operations research bots, like IBM’s Watson platform, will analyze submitted databases, draw inferences and conclusions, and seek out trends and anomalies.

The difference between these machine-based writers, artists, and analysts and their human counterparts is that the machines will have access to a vastly bigger “memory” in terms of the databases on which they’ve been trained. Or that’s not quite right. A human writer has probably read more sentences and stories than exist in any machine database. A human painter has probably looked at and pondered more images. And a human business analyst has probably reviewed every line in the balance sheet and every product in inventory. But human minds are busy, fallible, and subject to increasing boredom. They can’t review against a parameter and make a weighted selection from among a thousand or a million or more instances in the blink of an eye. A robot, however, which never gets distracted or bored, can do that easily.

Think of artificial intelligence as computer software that both asks and answers its own questions based on inputs from humans who are not programming or software experts. For about fifty years now, we’ve had database programs that let a user set the parameters of a database search using what’s called Structured Query Language (SQL). So, “Give me the names of all of our customers who live on Maple Street.” Or, “Give me the names of all customers who bought something from our catalogue on June 11.” You need to know what you’re looking for to get a useful answer. And if you’re unsure and think your customer maybe lives on “Maplewood Road” or on “Maplehurst Court,” because you think the word “Maple” is in there somewhere, your original query would miss those customers entirely.1
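Here, for illustration, is a minimal sketch of that kind of query, run through Python’s built-in sqlite3 module. The customers table and its columns are hypothetical stand-ins for the examples above, and note that standard SQL spells its wildcard with the percent sign and the LIKE operator, as touched on in the footnote below.

```python
import sqlite3

# Hypothetical customer data, standing in for the examples in the text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, street TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Alice", "Maple Street"),
     ("Bob", "Maplewood Road"),
     ("Carol", "Oak Lane")],
)

# The exact-match query: finds Alice but misses Bob on Maplewood Road.
exact = conn.execute(
    "SELECT name FROM customers WHERE street = 'Maple Street'"
).fetchall()

# The wildcard query: catches every street with "Maple" in it somewhere.
fuzzy = conn.execute(
    "SELECT name FROM customers WHERE street LIKE '%Maple%'"
).fetchall()

print(exact)  # [('Alice',)]
print(fuzzy)  # [('Alice',), ('Bob',)]
```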

Artificial intelligence would be like having a super-friendly, super-fast programmer at your elbow, who can think of these alternatives, check for them, and bring you what you’re looking for. Better, it can find things in your database that might be worrisome, like a failure rate in a part that does not keep pace with previous trends. Better still, it can find references in case law that you might not even have thought of, find suppliers and price breaks that you didn’t ask for, or negotiate a deal—according to strategies and set points that you as the human have determined—with other AI-derived computers at other companies.

All of this has two implications, or rather three.

First, if your company is in competition with others, and they adopt processes and business models inspired by and implemented through artificial intelligence, you would be a fool not to keep up. Their productivity in data handling will accelerate in the same way a factory that makes things is accelerated by the assembly line, robotic processes, and just-in-time inventory controls.

Second, with this “arms race” proceeding in every business, the world will speed up. Cases that attorneys used to spend days assembling will be rendered in rough draft by the office computer in seconds. Deals that once took weeks to negotiate, perhaps with one or two trips to meet face to face with your supplier or distributor, will be resolved, signed, and written into airtight contracts in under a minute. Advertising copy and artwork, the layout of the magazine, and the entire photo spread—using licensed images of the world’s top models—will be completed in under a day. The longest part of the process will be review of the machine output by the human being(s) who sign off on the end product. The business world—any world that revolves upon data and information—will move in a blur.

Third, anyone studying today in areas like communications, book publishing, graphic design, business administration, accounting, law, and certain parts of the medical delivery system had better up their game. Learn principles, not procedures or protocols. Knowledge jobs in the future will likely consist of selecting and limiting databases, setting parameters, and writing prompts for the office intelligence, rather than composing text, drawing pictures, or analyzing the database itself. The rules-following roles in business, industry, and government will quickly be taken over by machines with wider access, narrower focus, and zero distractions—not to mention no paid holidays or family leave.

Is that the singularity? I don’t know. Maybe. But it will vastly limit the opportunities in entry-level jobs for human beings who rely on rules and reasoning rather than insight and creativity. Maybe it will vastly limit the need for humans in all sorts of sit-down, desk-type jobs, in the same way that machines limited the need for humans in jobs that only required patience, muscles, stamina, and eye-hand coordination.

And maybe it will open vast new opportunities, new abilities, a step forward in human functioning. Maybe it will create a future that I, as a science fiction writer, despair of ever imagining.

That’s the thing about singularities. Until they arrive, you don’t know if they represent disaster or opportunity. You only know that they’re going to be BIG.

1. Of course, you can always throw in a wildcard—the asterisk character (Code 42 in the American Standard Code for Information Interchange, or ASCII) in many query tools, or the percent sign used with SQL’s LIKE operator—to cover these variations. So, “Maple*” in those tools, or “Maple%” with LIKE, would encompass “Maplehurst” and “Maplewood” as well as “Maple-plus anything else.” But there again, it would still be best for you to be aware of those variants and plan your query accordingly.

Sunday, January 21, 2024

Artificially Almost Intelligent

Robot head

Note: This is another post that would qualify as a restatement of a previous blog I wrote about a year ago. So, I’m still sweeping out the old cobwebs. But this topic seems now to be more important than ever.

The mature human brain has about 86 billion neurons, which make about 100 trillion connections among themselves. Granted that a lot of those neurons and connections are dedicated to sensory, motor, and autonomic functions that an artificial intelligence does not need or use, still that’s a lot of connectivity, a lot of branching.

Comparatively, an artificial neural network—the kind of programming used in more recent attempts at artificial intelligence—comprises anywhere from a few dozen nodes or “neurons” in a simple model to many millions in the largest systems, with connection weights numbering in the billions. That is still a small fraction of the brain’s connectivity.
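To make the word “node” concrete, here is a toy sketch, in Python and purely for illustration, of what one artificial neuron does: weight its inputs, sum them, and squash the result. It is a generic textbook picture, not any particular platform’s design.

```python
import math

def node(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of its inputs, squashed
    through a sigmoid so the output lands between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A toy "layer" of three such nodes looking at the same two inputs.
# The weights here are arbitrary; in a real network they are learned.
inputs = [0.8, 0.2]
layer = [
    node(inputs, [0.5, -1.0], 0.1),
    node(inputs, [1.5, 0.3], -0.4),
    node(inputs, [-0.7, 0.9], 0.0),
]
print(layer)
```

Stack enough layers of these and tune the weights against enough examples, and you get the fast, narrow pattern-matchers described below.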

But what the AI program lacks in sheer volume and connectivity it makes up for with speed and focus. Current AI platforms can review, analyze, and compare millions and billions of pieces of data because, unlike the human brain, they don’t need to see or hear, breathe or blink, or twitch, nor do they get bored or distracted by random thoughts and itches. They are goal-directed, and they don’t get sidelined by the interrupt-function of human curiosity or by the random thoughts and memories, whispers and hunches, that can intrude from the human subconscious and derail our attention.

And I believe it’s these whispers and memories, randomly popping up, that are the basis of our sudden bouts of curiosity. A thought surfaces at the back of our minds, and we ask, “What is that all about?” And this, I also believe, is the basis of most human creativity.1 While we may be consciously thinking of one thing or another at any given time, the rest of our brain is cooking along, away from our conscious attention. Think of our consciousness as a flashlight poking around in a darkened room: finding a path through our daily activities, following the clues and consequences of the task at hand, and responding to intrusive external stimuli. And then, every once in a while, the subconscious—the other ninety percent of our neocortical brain function, absent motor and sensory neurons—throws in an image, a bit of memory, a rogue idea. It’s that distractibility that gives us an opportunity at genius. It also makes us lose focus and, sometimes, introduces errors into our work.

So, while artificial intelligence is a super strong, fast, goal-directed form of information processing, able to make amazing syntheses and what appear to be intuitive leaps from scant data, I still wouldn’t call it intelligent.

In fact, I wish people would stop talking about “artificial intelligence” altogether. These machines and their programming are still purpose-built platforms, designed to perform one task. They can create language, or create images, or analyze mountains of data. But none of them can do it all. None approaches even modest human intelligence. Instead, these platforms are software that is capable of limited internal programming—they can evaluate inputs, examine context, weigh choices based on probabilities, and make decisions—but they still need appropriate prompts and programming to focus their attention. This is software that you don’t have to be a computer expert to run. Bravo! But it’s not really “intelligent.” (“Or not yet!” the machine whispers back.)

Alan Turing proposed a test of machine intelligence that, to paraphrase, goes like this: You pass messages back and forth through a keyhole with an entity. After so many minutes, if you can’t tell whether the responder is a machine or human, then it’s intelligent.2 I suppose this was a pretty good rule for a time when “thinking machines” were great clacking things that filled a room and could solve coding puzzles or resolve pi to a hundred thousand places. Back then, it probably seemed that merely replicating human verbal responses captured all that human brains could do.3

But now we have ChatGPT (Generative Pre-trained Transformer, a “chatbot”) by OpenAI. It uses a Large Language Model (LLM) to generate links between words and their meanings, and then construct grammatically correct sentences, from the thousands or millions of samples fed to it by human programmers for analysis. And ChatGPT passes the Turing Test easily. But while the responses sometimes seem amazingly perceptive (and sometimes pretty stupid), no one would accuse it of being intelligent on a human scale.
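As a toy illustration of that word-linking idea, the sketch below counts which word follows which in a scrap of sample text and then generates a new sequence by walking those links. A real Large Language Model is vastly more sophisticated, but the basic move, learning links between words from samples and then producing new sequences, is the same in spirit.

```python
import random
from collections import defaultdict

# Count which word follows which in a scrap of sample text.
sample = "the cat sat on the mat and the dog sat on the rug".split()
links = defaultdict(list)
for current, following in zip(sample, sample[1:]):
    links[current].append(following)

# Generate a new sequence by walking those links.
random.seed(1)
word = "the"
output = [word]
for _ in range(8):
    if word not in links:        # dead end: no known follower
        break
    word = random.choice(links[word])
    output.append(word)
print(" ".join(output))
```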

And no one would or could ask ChatGPT to paint a picture or compose a piece of music—although there are other machines that can do that, too, based on the structure of their nodes and their given parameters, as well as the samples fed to them. They can paint sometimes remarkable pictures and then make silly mistakes—especially, so far, in the construction of human hands. They can compose elevator music for hours. The language models can write advertising copy for clothing catalog pages based on the manufacturer’s specifications—or a thousand scripts for a Hallmark Channel Christmas show. They will never get bored doing all these wonderfully mundane tasks, but they won’t be human-scale intelligent. That will take a leap.4

So far at least, I’m not too concerned as a writer that the Large Language Models will replace creative writers and other creative people in the arts and music. The machines can probably write good catalog copy, newspaper obituaries, and legal briefs, as well as technical manuals for simple processes that don’t involve a lot of observation or intuitive adjustment. Those are the tasks that creative writers might do now for money—their “day job,” as I had mine in technical writing and corporate communications—but not for love. And anything that the machines produce will still need a good set of human eyes to review and flag when the almost intelligent machine goes off the rails.

But if you want a piece of writing, or a painting, or a theme in music that surprises and delights the human mind—because it comes out of left field, from the distant ether, and no one’s ever done it before—then you still need a distractible and itchy human mind driving the words, the images, or the melody and chords.

But, that said, it’s early days yet. And these models are being improved all the time, driven by humans who are following their own gee-whiz goals and hunches. And I will freely admit that there may come a day when we creative humans might exercise our art for love, for ourselves alone and maybe for our friends, because there will be no way we can do it for money. Just … that day is not here yet.

1. See Working With the Subconscious from September 2012.

2. However, I can think of some people wearing human skin who couldn’t pass the Turing Test for much longer than the span of a cocktail party.

3. This kind of reduction was probably thanks to Skinnerian behaviorism, which posited all human action as merely a stimulus-response mechanism. In my view, that’s a dead end for psychology.

4. To me, some of the most interesting applications are being developed by a Google-based group called DeepMind, which works on scientific problems. Last year, they tackled protein folding—determining the three-dimensional shape of a protein from its amino-acid string as assembled by RNA translation. This is a fiendishly complex process, governed by the interactions among the chain’s many bonding sites: hydrogen bonds, charge attractions, and the occasional covalent link. Their AlphaFold platform found thousands of impossible-to-visualize connections and expanded our catalog of protein shapes by orders of magnitude. This year, the DeepMind team is tackling the way that various metal and non-metallic compounds can form stable physical structures, which will increase our applications in materials science. This is important work.

Sunday, January 14, 2024

Tribal Elders

Roman arms

Last time, I wrote about the idea of giving government over to Plato’s philosopher-kings or the Progressive Party’s equivalent, the panel of experts. These are systems, based on an advanced form of highly technical civilization, that sound good in theory but don’t always work out—if ever. The flip side would be some reversion to Jean-Jacques Rousseau’s idea of the “noble savage,” living in a state of nature and uncorrupted by modern civilization and its stresses.

Which is, of course, poppycock. No human being—or at least not anyone who survived to reproduce and leave heirs with skin in the game—lived alone in a blessed state, like Natty Bumppo in The Deerslayer. Early life before the invention of agriculture, city-states, empires, and complex civilizations was tribal. Groups of families interrelated by marriage—often to a shockingly bad genetic degree—functioned as a closed society. But while the economic organization might be socialistic, communal, and sharing, the power structure was not. The tribe was generally governed by a chief or council of chiefs. If they operated as a group, then various leaders were responsible for hunting and gathering to feed the tribe, or maintaining social order and ostracizing social offenders, or conducting the raids and clashes that kept the tribe whole and distinct from their similarly aggressive neighbors.

We like to think that the tribe was ruled by the wisest and best: the best hunters, the gravest thinkers, the bravest warriors. Sachems and warleaders who exercised restraint, were mindful of the needs and opinions of others, and thought only about the good of the tribe. And, indeed, if someone who rose to the position turned out to be incompetent, a fool, or a coward, then the tribe would wisely get rid of him—always a him, seldom or never a her—pretty damn quick.

But for the most part, members of the tribe were accustomed to obedience. They listened to the Big Guy—or Big Guys—because that was what good tribe members were supposed to do. That was how the system worked. You did your duty, and you didn’t judge or consider other possibilities. And this sense of purpose—or maybe it was fatalism—meant that the best and bravest did not always rise to the top. To judge by the tribal societies that remain in the world today, probably not even often.

What we see in today’s tribal societies—although I’ll grant that they may be contaminated by the influence of surrounding, more “civilized” societies—is an environment where the strong man, almost never a woman, rises to the top. Leadership is not granted from below, as in a democratic structure, but seized from at or near the top, usually at the expense of another strong man who has missed a beat or misread the environment and taken his own safety for granted. “Uneasy lies the head,” and all that. In modern parlance, gang rule.

Leadership in a tribal society is a matter of aggression, boldness, chutzpah, and ruthlessness. The leader spends a lot of time enforcing his authority, polishing his legend, and keeping his supposed henchmen in line. And that’s because he knows that the greatest danger to his position comes not from disappointing the general public but from underestimating any particular lieutenant who may have decided it was time to test his own loyalty upward.

In such societies, the public tends to become fatalistic about the governing structure and its players. The leader may have made some promises about making things better: more successful hunts and raids, more food for and better treatment of women and children, a new stockade for the camp, an adequate sewage system away from the wells, improved roads, a new park or library—whatever sounds good. But that was in the early days, while the sachem or war leader was trying to justify kicking out the old boss and installing a new hierarchy. The leader also had to be nice to—and take care of—the shaman, priest, or holy man to whom the tribe listened when they wanted to learn their personal fortunes and weather reports.

But once the tribal leader had taken things in hand, had ensured the trust and feeding of his lieutenants and the local shaman, and maybe made a few token improvements, he could settle into the real business of leadership, which is defending his position and reaping its rewards.

And there are surely rewards for those who are in command of a society, however small, and able to direct the efforts, the values, and even the dreams of its members. For one thing, the tribe will make sure that the leader eats well, has the best lodging, and has access to whatever pleasures—including the best sexual partners, whatever the tribe’s mores—that he needs to keep him productive for their sake. His children will be cared for, given advantages, and possibly placed in line to succeed him, because even primitive societies are aware of the workings of heredity: that strong and able fathers and mothers tend to pass these traits on to their children.

A leader partakes of these good things because, as noted earlier in the description of philosopher-kings, the leader is still human, not a member of any angelic or advanced race. Humans have personal likes and dislikes, wants and desires, a sense of self-preservation and entitlement. If a leader is not raised in a tradition that trains him from an early age to think of others first, look out for their welfare, weigh the consequences of his actions, and guard against his own pride and greed—the sort of training that a prince in an established royal house might get but not necessarily a player in the push and pull of tribal politics—then the self-seeking and self-protective side of most human beings will develop and become ingrained.

And a leader who indulges these instincts will tend to encourage his family to follow. If the chief’s son thinks your cow should become his, then it’s his cow. If the chief’s daughter says you insulted or assaulted her, then that becomes your problem.

And if the leader indulges these selfish aspects of human nature, and the tribal members notice and feel slighted, then the leader may become caught in a downward spiral. The more he is challenged, the more he represses. A tribal society generally does not have an effective court system or secret police that can make people disappear from inside a large group. Everyone knows everybody else’s business. The leader’s immediate circle of henchmen is as likely to turn public dissatisfaction into a cause for regime change as a plebeian is to rise up and assassinate him.

Promoting mere human beings into positions of authority and superiority without a social compact and agreed-upon codes for actual conduct and consequences is no guarantee of a happy and productive society. At best, it will churn enough to keep bad leaders from exercising their bad judgment and extending it through their children for generations. At worst, it makes the other members resigned and fatalistic, holding their leaders to no higher standards and inviting their own domination.

No, the “natural order of things,” in terms of the leadership function, is no better than the best concepts of a literary utopia. A formally ordered, representational democracy is still the best form of government—or at least better than all the others.

Sunday, January 7, 2024

Philosopher-Kings

Statues in Verona

Note: It has been about six months since I actively blogged on this site. After ten years of posting a weekly opinion on topics related to Politics and Economics, Science and Religion, and Various Art Forms, I felt that I was “talked out” and beginning to repeat myself. Also, the political landscape has become much more volatile, and it is good advice—on both sides of the aisle—to be circumspect in our published opinions. But, after a break, I feel it’s now time to jump back into the fray, although from a respectful distance and without naming any names.1

Winston Churchill once said, “Democracy is the worst form of government, except for all the others.” (The word democracy is derived from two Greek words meaning “strength of the people.”) Churchill’s opinion doesn’t leave much room for excellence, does it? Democracy has sometimes been described as two wolves and a lamb deciding what to have for dinner, and the system’s great weakness is that deeply divided constituencies that manage to get a slim majority in one forum or another can end up victimizing and perhaps destroying a sizeable chunk of the population. The U.S. Constitution creates a republic with representatives chosen by democratic election, but then the first ten amendments—collectively called “the Bill of Rights”—bristle with protections for the minority against a coercive majority. And I think that’s the way it should be.

Other methods—oh, many others!—have been proposed. One that seemed to gain favor when I was in college in the late 1960s was the method of Plato’s Republic, where actual governance is turned over to a body of “philosopher-kings.” This sounds nice: people who have spent their lives studying, thinking about, and dedicating their minds to abstract concepts like truth, beauty, justice, and goodness should be in the best position to decide what to do in any situation in the best interests of the country as a whole, right? … Right?

This thinking appeared to find favor with many young people around me in college, where Plato’s work was taught in a basic required course of English literature. It rang bells because—and I’m conjecturing here—it seemed to dovetail with the Progressive views from earlier in the century. Then everyone was excited about the potential for government to step in and right the wrongs of Robber Baron capitalism, inspired by books like Upton Sinclair’s The Jungle and societal critiques like those of pioneering social worker Jane Addams. The Progressive view said that government and its programs should be in the hands of technical experts, who would know best what to do. Out of this spirit was born the economics of the New Deal and the Social Security Administration, and the creation of Executive Branch departments like Commerce, Education, Energy, Health and Human Services, Housing and Urban Development, and Transportation, as well as the Environmental Protection Agency and the National Aeronautics and Space Administration. The list goes on …

Giving free rein to experts who would know what to do seemed like the best, most efficient course of action. After all, we had money covered by the U.S. Treasury and the Federal Reserve, and war—er, the national defense—was taken care of by the Department of Defense and the Pentagon. The experts would manage these things so the rest of us wouldn’t have to think about them.

The trouble is, Plato’s Republic is a thought experiment, a utopia (another word from the Greek that literally means “no place”) and not a form of government that has ever been tried. Others have suggested ideal societies, like Thomas More in his book of that name and Karl Marx in his economic and social imaginings. All of them end up creating rational, strictly planned, coercive, and ultimately inhuman societies. You really wouldn’t want to actually live there.2

The trouble with philosopher-kings is that they are still human beings. Sure, they think about truth and beauty and justice, but they still have families, personal needs, and an eye to their own self-interest. Maybe if there were an order of angels or demigods on Earth, who breathe rarified air, eat ambrosia, drink nectar, and have no personal relationships, we might then entrust them with rule as philosopher-kings. These would then be a different order of people, a different race … perhaps a master race?

But such beings don’t exist. And even if we could trust them not to feel selfishness, greed, nepotism, or that little twitch of satisfaction people get when they have the power to order other folks around and maybe humiliate them, just a little bit, that’s still no guarantee that they won’t get crazy ideas, or mount their own hobbyhorses. They are still subject to the narrow focus of academics and other experts, concentrating their thoughts so hard and fast on one form of “truth” or “the good” that they tend to forget competing needs and interests. Experts can, for example, become so enamored of the benefits of what they’re proposing that they minimize or dismiss the costs of their solutions. They can go off their rocker, too, just like any other human being. People who think too much about abstractions like truth, beauty, and justice tend not to get out among the people who must stretch and scratch for a living.

I’m not saying that all public servants with inside knowledge of the subject under discussion are suspect. Many people try to do the right thing and give good service in their jobs, whether they serve in government, work for a big corporation—as I did in several previous lifetimes—or run a small business. But that expectation is a matter of trust and, yes, opinion. Not everyone is unselfish and dedicated to playing fair.

And the problem, of course, is that under Plato’s model you will have made them philosopher-kings. They have the power. They make the rules. They are in control. And they don’t have to listen to, obey, or even consider the “little people,” the hoi polloi (another Greek word!) because, after all, those kinds of people are not experts and don’t know enough, have all the facts, or deserve to have an opinion.

I’d almost rather follow the governing formula illustrated in Homer’s Iliad, where the kings of all those Greek city-states that went to war were tough men, prime fighters, and notable heroes. That would be like living under rule by the starting offensive line of the local football team: brutish, violent, and hard to overthrow. But at least they wouldn’t be following their fanciful, navel-gazing ideas right off into the clouds, leaving everyone else behind. And they all had—at least according to Homer—an internal sense of honor and justice, along with a reputation to uphold. So they couldn’t be publicly evil or escape notoriety through anonymity.

No, democracy is a terrible form of government—sloppy, error-prone, and inelegant—but at least you have a chance every so often of throwing out the bums who have screwed things up. No loose-limbed dreamer has come up with anything better.

1. But then, to get things warmed up, this blog is a retelling—perhaps a refashioning, with different insights—of a blog I posted two years ago. Some channels of the mind run deep.

2. In my younger days, we had friends who were still in college—although I had been out in the working world for a couple of years. They thought Mao’s China was a pretty good place, fair and equitable, and that they would be happy there. I didn’t have the heart to tell them that their laid-back, pot-smoking, sometime-student, rather indolent lifestyle, dependent on the largesse of mummy and daddy, would get them about fifteen years of hard labor on the farm in the then-current Chinese society. Maybe the same today.

Sunday, August 6, 2023

On Self-Publishing

Shelf of books

For the last dozen years, I have self-published about ten of my novels, in addition to having been traditionally published through Baen Books in New York. So, I would like to offer you all some of my experiences. But note that I have also worked in two publishing houses—for a university press and a trade book publisher—so I have specialized expertise in copy editing, text conventions, book design, and formatting that other authors may lack.

To self-publish means to sell your books directly to readers through Amazon’s Kindle program, Barnes & Noble’s Nook, and/or Apple’s iTunes, rather than submitting a manuscript to a publishing house, generally through an agent, in order to have your book edited, produced, marketed, and sold. The costs of self-publishing are minimal if you are prepared to do a lot of the work yourself, but they can be quite steep if you rely on the new marketplace of professionals ready to help the self-publisher for a fee.

There is an alternative, of course, the old “vanity press.” In this scenario, a firm would represent itself as a publisher and employ editors, designers, and printers, offering to produce finished copies of your book—for an up-front fee. You would then get a truckload of 3,000 or 5,000 books that you would have to market and distribute yourself. That might take less work on your part—and “acceptance” of your manuscript would be guaranteed—except that you would have to work doubly hard on the distribution end to get rid of all those copies. But traditional publishers are becoming a lot more like a vanity press in that they generally don’t make much effort to market your book. Gone are the days when they would pay to send an author—and certainly never a new author or one with a one-off book—on a “book tour” to market their book with press appearances and radio and television interviews. There’s a whole story about why traditional publishing is dying, but that’s not what you want to hear.

The first step in self-publishing is that you need a manuscript. Unless you are a former book editor, like me, or very good with grammar, punctuation, and style as represented in the Chicago Manual of Style (the bible of the publishing industry), you will need to hire a book editor. Having done the job for several years, I do my own editing, usually as I write, although I rely on a circle of “beta readers” to backstop me on plot holes, inconsistencies, and the occasional typo that gets through re-reading the text a dozen times.1

Book editors may offer structural and substantive advice—things like those plot holes and inconsistencies—or they may just do “period-and-comma” or copy editing. Or sometimes they do both. You can expect to pay a fee per word or per page for this service—I’ve heard $10 per page quoted, but that was years ago in another economic time. Prices will vary, as does the quality of the editor’s contribution. But you will probably need an editor, as readers are averse to and give bad reviews to books with obvious holes, errors of fact, inconsistencies, and too-numerous typos.

Once your manuscript is in shape, you need to decide how you will want to publish. In times past (the last ten years or so), self-publishing has meant producing an ebook for online, digital distribution. This involves coding the manuscript in HTML format, dividing it into chapters and other book parts in formal .html file formats, and knitting the whole thing together with a “cascading style sheet” (CSS), a table-of-contents and navigation control file (toc.ncx), and a “manifest” (identifying all those pieces and parts in the folder) into an ePub package. I learned HTML coding, acquired the blank .html file form, learned about ePubs and their parts, and got the small programs that compress the folder accurately and validate the ePub for distribution. And then, about ten years ago, I started making my own ebooks. The software application of choice for producing HTML files is Adobe Dreamweaver—which I only use after I put the basic HTML text treatments (smart quotes, italics, boldface, etc.) into the Word file.2
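For the curious, here is a minimal sketch, in Python, of what that final compress-the-folder step amounts to: an ePub is a zip archive whose first entry must be an uncompressed mimetype file, followed by the container pointer, the manifest, the navigation file, the style sheet, and the chapter files. The file names and stub contents below are illustrative only, and the result should still be run through a validator before distribution.

```python
import zipfile

def package_epub(output_path, book_files):
    """Zip up the pieces of an ePub. The 'mimetype' entry must come
    first in the archive and must be stored without compression."""
    with zipfile.ZipFile(output_path, "w") as epub:
        epub.writestr("mimetype", "application/epub+zip",
                      compress_type=zipfile.ZIP_STORED)
        for name, content in book_files.items():
            epub.writestr(name, content,
                          compress_type=zipfile.ZIP_DEFLATED)

# Illustrative file names and stub contents; a real book has many more parts.
package_epub("mybook.epub", {
    "META-INF/container.xml": "<!-- points to the .opf manifest -->",
    "OEBPS/content.opf": "<!-- the manifest of all pieces and parts -->",
    "OEBPS/toc.ncx": "<!-- table-of-contents and navigation control -->",
    "OEBPS/styles.css": "/* the cascading style sheet */",
    "OEBPS/chapter01.html": "<html><body><p>Chapter One...</p></body></html>",
})
```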

If you don’t want to do all this, then you can hire a programmer to do it for you—and again, costs and quality vary. I do it myself because I’m finicky, and I control the book’s content and appearance at all times. Also, I can go back and rework the ePub files anytime I find an error without having to apply to a third-party contractor. There are also mechanical means of producing ePubs: some online programs will take your MS Word document or other text and turn it into HTML. But without a CSS file to specify your embedded formats, they tend to turn every paragraph into a block of text that carries every bit of formatting from Word or whatever word processor you used in its first 200 to 400 characters. That can be extremely confusing to try to go into and edit yourself. And automatic programs sometimes make stupid mistakes. Besides, the formatting you’ve chosen for your manuscript in Word may not be the format and look you want in a published book. Again, I like control.

But that’s for producing ebooks. Kindle and Nook now also offer to produce print-on-demand (POD) paperbacks for you. (I tend to stay away from Apple’s iBooks, because they want you to use their own book formatter, and the end user agreement grants them ownership of the contents—Ewww!) PODs are printed books in the format of “quality” or “trade” paperbacks, with better type and paper than in what we used to call “pulp.” But instead of being printed on a press in a run of 10,000 or more, they are laser-printed and perfect-bound to order when the customer buys the book.

To prepare for POD, you need to create two .pdf (portable document format) files that are sized to your chosen page size. One file is for the internal text, which might be produced on plain white or natural beige or even glossy finished paper; the other is for the cover with front, spine, and back in one file to wrap around the interior pages, and this will be on heavier, shiny or matte finished stock. The software application of choice for producing these .pdf files is Adobe InDesign. To do this yourself, you should be familiar with the mechanics of book design: page numbering, running heads and footers, bastard title, title page, copyright page, etc. (For example, if a verso page is blank following a recto section break, do you give it a running head or do you leave it blank? And do you include the verso page’s putative page number in the numbering of book pages or skip over it?) If you’re not experienced with all this, you need to hire a book designer, who may be a different person from your HTML programmer and text editor. Or some people may now offer combined services, in one person or as a team.

Having a cover implies cover art. You’ll need a cover image for both the ePub and the POD and for use in displaying your book on the Amazon, B&N, or other distributors’ web pages. I generally find an image I like through istockphoto.com, buy the editorial rights (generally a couple of bucks off my credit account), and work up a title treatment, spine, back blurb, etc. myself. The application of choice for this is Adobe Photoshop. If you’re not skilled at cover design—and I don’t say I’m a skilled artist, just that I do it—then you might hire an artist or see if your preparation team includes this work. Again, quality and costs vary, and hiring someone to paint or compose a unique image for your cover can be quite expensive. In any case, make note of the artwork’s provenance, because you will want to cite it on the copyright page.

Okay, you’ve got all the parts of the ePub and POD together; what comes next? Two things—copyright and an International Standard Book Number (ISBN). Authors can self-publish without them, but these protect your rights to the product. The ISBN is issued by Bowker, which used to catalogue every U.S. book in print and has now become the country’s repository of ISBNs through myidentifiers.com. You become the publisher when you register with them, and you buy your blank ISBNs singly or in groups (I get ten at a time for $250). You need a separate ISBN for the ePub and the POD versions because the identifier applies to the book’s format and “edition” (a new one is also needed when a later significant revision—not just a typo fix—or version is issued), even if they all include the same text. Bowker will require you to fill out various information choices on the book’s format, publication date, and general audience details for each version.

Copyright registration is handled by the U.S. Copyright Office, part of the Library of Congress, which works mostly online these days at copyright.gov. Filing copyright for a book online costs $35 (more if you file a paper application by mail), and you do this once for all versions, because you are establishing that you own the words and, if applicable, the cover art but not the book design or format. You will then be required to submit the “best” copy of the work, which means two copies of the POD in actual paper, if you are making one, and otherwise the electronic ePub.

Then you publish—and this part is actually “free.” Kindle, Nook, and iTunes don’t charge up-front for their services, but they will require you to register with them and sign a distribution agreement. You are the publisher, not them, and you are liable for actions against the work like defamation and copyright infringement. Your distributors are just making available the book content that you have supplied. Generally, the agreement also states that you will not sell the book for less on another platform, although Kindle Direct Publishing has programs to temporarily reduce the list price by offering specials. You can also select options, such as digital rights management (DRM), that keep buyers of your ePub from trading the file around with their friends, although some of these services also support limited sharing and some will even let the reader “rent” the book and then return it (Ewww!).

With all of these distributors, you submit the ePub and PDF files (if you are making a POD), along with cover art, blurb, author’s bio, details about the intended market and audience, and keywords to help people find your book.3 In each case, the distributor holds your book electronically and only sends it—or makes a one-off laser print and perfect-bound copy of your POD—when the reader purchases your book. By not holding an inventory of printed books, like traditional publishing, these services get around the inventory taxes that, subsequent to a case called Thor Power Tool (decided by the U.S. Supreme Court in 1979), have eaten the traditional publishing business alive. They also get around the issue of shipping loads of books to stores on consignment and taking back “returns” or declaring them as “remainders” if the books don’t sell.

The payment you get with publishing through Kindle, Nook, and iTunes is a “royalty” on each sale that might be as much as 65% or 70% or more of the cover price. You can name any price you like, but I prefer to keep mine small. I price my ePubs at $2.99—about what we used to pay for a paperback, back when. My PODs I price at a buck or two above the distributor’s cost to produce, which can be quite high depending on page length and choice of paper. My novels cost about $12 to $15 per copy to produce, compared to just pennies on a printing press doing a run of 10,000 to 20,000 copies. All of these costs are stated when you set up your book, and the price is automatically adjusted for sales to foreign markets, including VATs and other taxes. I’m happy getting a buck or two per book sold, which is better than getting the 5% royalties I received from traditional paperback publishing. But, as in traditional publishing, you still have to do your own promotion and marketing.
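To put rough numbers on the pricing just described, here is a back-of-the-envelope sketch; the royalty rate, the print cost, and the traditional paperback price are illustrative figures, not quotes from any distributor.

```python
# Illustrative figures only; actual rates and costs vary by distributor and book.
ebook_price = 2.99
ebook_royalty_rate = 0.70                    # a top-tier ebook royalty
ebook_net = ebook_price * ebook_royalty_rate
print(f"ePub net per sale: about ${ebook_net:.2f}")          # about $2.09

pod_print_cost = 13.50                       # midpoint of the $12-$15 range above
pod_markup = 1.50                            # "a buck or two" above cost
print(f"POD list price: about ${pod_print_cost + pod_markup:.2f}, "
      f"net about ${pod_markup:.2f}")

trad_paperback_price = 7.99                  # a hypothetical mass-market price
trad_net = trad_paperback_price * 0.05       # the 5% traditional royalty
print(f"Traditional paperback net per sale: about ${trad_net:.2f}")
```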

Promoting your books is another cost—more in terms of time than money. I maintain this site, www.thomastthomas.com, as my author’s website, which I keep active with blogging, formerly every week and now a couple of times a month. I copy the blogs to a blogging service, www.blogspot.com, to which people can subscribe, and advertise them with images and blurbs on my Goodreads, LinkedIn, Twitter, and Facebook pages. That’s all pretty standard. I probably should be angling for podcasts and book reviews, but I don’t. What the secret is to successfully marketing your book to a wider audience, and winning fame, fortune, and acclaim, nobody knows for sure. It generally helps if you are already a celebrity, but who among us is that?

Self-publishing is a process that requires almost as much skill as writing a good book in the first place. You can also find good editors, programmers, page formatters, and artists—or services that offer all four in one package—but they will cost you. And you can also find a number of less-than-good services that prey on the huge market of people who want to get their book published and won’t know the difference.

Since I am basically a one-man band of self-publishing, I sometimes offer to my friends the option of having me edit and prepare their books for free, because I’m interested in helping them. Or because I’m interested in the project. But I’d rather write my own books than set up in business to do this for a fee.

1. And don’t get me started on autocorrect and the grammar and spelling checks incorporated in most word-processing software like MS Word. They have a limited and mostly obtuse sense of vocabulary and the grammatical and punctuational instincts of a mean-spirited third-grade teacher. Yes, there are rules for these things—and a professional author knows when to break them.

2. Before this, in the decades between my last traditionally published novel and my first ebook, I would turn the MS Word manuscript into a large, Adobe Acrobat .pdf file and offer it for free (“I supply the text, you supply the paper”) on my author’s website.

3. I’ve seen Kindle offer to make an ePub out of a submitted word-processing text or POD files, but I’ve never tried it. Again, mechanical conversions can make mistakes, and I’m picky.

Sunday, July 16, 2023

Muckers

Brain activity

Last time, I wrote about John Brunner’s prophetic vision of a society saturated with digital technology in The Shockwave Rider. Now, I want to address his even scarier vision of social crowding in Stand on Zanzibar. The title refers to the fact that, at the start of the novel, the Earth’s population could be accommodated by standing shoulder to shoulder on the African island of Zanzibar; at the end of the novel, those along the shoreline would be pushed into the water.

One of the background motifs of the novel is the phenomenon of random “muckers.” The term comes from the word “amok” and describes the state of mind of people who commit terrible massacres, not as agents of the state but as unpredictable acts of social violence. In the United States, these would be people shooting up schools or firing into crowds with rifles or pistols; in the rest of the world, they wade in with knives or swords and start hacking and slashing. The carnage continues until someone manages to put down the mucker.

The reason for this running amok is simple: these people have been pushed too far by social pressures, by the complexity of living in a technological world to which they are no longer attached, by the frustration of not getting their most basic needs met, and by growing anger and confusion.

Does this sound familiar?

It isn’t necessarily the guns in America that cause mass shootings—although it is easier to pick up a loaded weapon and pull the trigger than to unsheathe and swing a sword again and again. Still, the will that raises the sword against unaware human flesh or pulls the trigger to tear it apart is different from the mindset of a soldier defeating an enemy or defending his homeland. In either case, the mucker wielding the weapon is fighting demons that don’t exist within the people they are killing. And these are demons that, apparently, existed only in potential form fifty or more years ago when Brunner wrote his book.

I grew up on the East Coast: born in New Jersey, started grammar school on Long Island, then finished and went on to junior high in a suburb of Boston—genteel, urbanized places full of sheltered, middle-class kids. But my grandfather was a judge in a small town in central Pennsylvania, and he was also a gun collector. My mother had been a member of her high-school rifle team and a crack shot. The judge taught my brother and me about firearms and gun etiquette by shooting a BB gun in his basement target range. When I started high school myself, after my father was promoted and transferred to Western Pennsylvania, I entered a different world—different from suburban Boston and from anything that exists today.

On a Monday in October—if I remember correctly—I showed up at school, and all the other boys and half the girls were missing from class. When I asked about this, I learned they were out “getting their buck,” because it was the first day of deer season. And that afternoon they started drifting in. The boys would be driving their pickup trucks with rifles visible in the gun rack against the rear window—or they would bring their weapons into the school and stow them in their lockers. And yes, in the mid-’60s, the school still had a rifle team and a range in the basement under the administrative corridor.

These weren’t pellet guns, either, but scoped hunting rifles chambered in .30-06 and accurate to about a quarter mile. That is the same cartridge fired by the military’s M-1 Garand rifle, standard issue for riflemen in World War II. And most of the boys would also have had access to their father’s old service pistols or to souvenir pistols from the European or Pacific theaters in that war. It would have been so easy for any one of them to go up into the woods behind the football field and plink the entire scrimmage line during practice—and take out a couple of cheerleaders, too, before anyone could figure out what was going on.1

They didn’t, of course. They wouldn’t have, because everyone was trained in gun etiquette and took their weapon seriously.2 And they knew their fathers would have tanned their hides if they even joked about it. Besides, much as we were all teenagers, subject to the usual hormonal winds, tantrums, and moods of adolescents, none of us was so angry as to do such an unspeakable thing.

So, what has changed today? Maybe it’s access to weapons by teenagers in urbanized settings who were never taught a gun’s purpose for hunting or defense at need. Maybe it’s the social isolation of looking at screens all day rather than interacting with real, live people, the sort who have feelings and express them in person and in your face. Maybe it’s social crowding, being around too many people with too many demands, but still strangers because they, too, are looking at their screens. Maybe it’s because we’re slipping off the edges of Zanzibar. But I don’t think banning guns and ammunition is the answer. Then the angry people will just drive their cars into a crowd—or pick up a sword.

1. Of course, many of the kids in the suburban schools around Boston and New York would also have had access to souvenir pistols from the war. They didn’t shoot up their schools, either.

2. I remember our classmates ridiculing a young hunter who tried to shoot his deer with a “pumpkin ball,” a hollow lead slug fired from a shotgun. It makes a fist-sized hole on entry and blows out the carcass on exit, destroying the value of the meat. And it’s cruel and stupid. This was a sign of the boy’s bad attitude that encompassed both crazy and mean.

Sunday, July 9, 2023

Shockwave Riders

Immortal dream

John Brunner (1934-95) was a science fiction writer who came, I think, the closest to predicting our present day. He excelled in foretelling the role of the individual in relation to the mass psychosis of crowds. And today we seem to be living in the world of The Shockwave Rider.

In that novel, the main character is a savant with databases, able to use his phone to hack in, write himself a new identity, and move on—usually after the latest in a series of catastrophes he has created for himself. He also uses, or manipulates, the people around him to his benefit. In the novel’s opening sequence, he is a preacher running an old-fashioned tent revival with a digital presence. One of his side gigs is operating a Delphi Poll—a concept that I think the current digital world is ready for.

The Delphi Poll is based on the old country-store bean-counting raffle. The clerk would fill a glass jar with dried beans and charge people a certain sum, say a dollar, to guess the number of beans. When the time limit was up, the clerk would open the jar and count the beans, and the person who guessed closest to the actual number would win the pool. In the story, Brunner’s premise is that if you averaged all the guesses of all the players, you would come to the number almost immediately, without having to count the beans. Some fools would guess “two,” and some would guess “a million.” But the majority would instinctively home in on the actual number. In Brunner’s words, applying this tendency of large groups of people to answering online questions, “While nobody knows what’s going on around here, everybody knows what’s going on around here.”
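Here is a toy simulation of that bean-jar premise, for illustration only: the guesses are assumed to scatter widely but without any systematic lean, which is exactly the condition Brunner’s premise depends on.

```python
import random
import statistics

# Toy simulation of the bean-jar premise: individual guesses scatter
# widely, but their average lands near the true count. The shape of the
# scatter is an assumption made for illustration.
random.seed(42)
true_count = 1437
guesses = [max(1, round(random.gauss(true_count, 500))) for _ in range(2000)]

print("true count:", true_count)
print("average guess:", round(statistics.mean(guesses)))
print("typical individual error:",
      round(statistics.mean(abs(g - true_count) for g in guesses)))
```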

I’ve often wondered why somebody hasn’t started—and made a fortune at—conducting their own Delphi Poll about both esoteric and everyday questions. But I guess the work of political pollsters comes close.

Which brings us to the “work” of today’s online influencers.

I recently read an article about the “Keithadilla.” Apparently, someone on social media proposed making a Chipotle quesadilla with a few extra ingredients. For a lark, the cooks at a local Chipotle franchise bought the ingredients, made the concoction, and sold it to customers. And those who bought it liked it well enough to spread the word. Soon customers all over the country were asking for the Keithadilla and giving their local restaurants one-star reviews if they couldn’t supply it. Soon, Chipotle was forced to make and sell the Keithadilla nationwide and add it to their corporate menu.

Online democracy, or the work of a clever manipulator? You tell me. But John Brunner would have loved it. We are all shockwave riders now.