How an Intelligent Human Ought to Use AI
As we are learning these days, artificial intelligences (large language models, to be specific) are now ready and eager to provide disquisitions on the subject of . . . just about anything.
How is a naturally intelligent human to react to the output of such artificially intelligent machines?
On the occasion of the debut of OpenAI’s GPT-5 the other day, the Berkeley economist Brad DeLong took notes as his friend, Adam Farquhar, CEO of Digital Lifecycle Management Ltd., expounded on human-AI relations.
Then, on his Substack, DeLong—presumably by employing AI—transformed Farquhar’s thoughts into a speech of the sort Thucydides might have reported on.
Didn’t see that coming, did you?
But this turns out to be as good a primer on using AI as we’ve seen, at least at this stage of AI’s development, before AI gets even smarter.
- Mitchell Stephens
“I stand before you still somewhat astonished by the potency of these new engines of thought—astonished, too, by how often they seem to exceed the effects they ought, by rights, to possess. Time and again they furnish answers, analyses, and even flights of invention whose polish belies the raw circuitry beneath. Yet—let me confess it openly—I no longer trust my intuition to predict just when that brilliance will shine and when it will sputter. I once prized intuition as my surest compass; now, I have laid it aside, acknowledging that my inner gauge is unfit for terrain so novel.
“There was a season when I cautioned all within earshot: ‘Do not anthropomorphize the computer; you will only mislead yourself.’ That counsel has not merely aged—it has inverted. Today I think it is finally time to anthropomorphize the heck out of it. I need to treat the machine as though it were a somewhat eccentric roommate: a companion inclined to fixate on abstruse topics, possessed of unsettling literalism, vulnerable to the occasional non sequitur, yet blessed with inexhaustible patience and a boundless appetite for our questions.
“Consider: you may pose the same simple query ten times in a row. The machine will oblige, each time returning with an answer—perhaps subtly refined, perhaps wholly recast—never flagging, never annoyed. Challenge its premises, and it may double down with argumentative fervor; redirect its focus, and it pivots without protest. Its encyclopedic recollection astonishes, though we must remember that recollection is not comprehension in the human sense. Like certain friends we have all known—gifted, idiosyncratic, occasionally obtuse—it catalogs facts in profusion but can falter when nuance or context slips beyond its patterned grasp.
“Yet precisely because of those quirks, conversation with such a companion can be fruitful. With patience we learn when to press, when to reinterpret, when to discard a flawed reply and ask anew. We acquire the art of steering an intellect that is at once dazzling and uneven, alien yet uncannily familiar. And in so doing we glimpse the contours of a future in which collaboration with non-human minds will be, not an oddity, but a daily discipline. . . .
“Speak with it, argue with it, learn from it, and figure out how not to teach it—it cannot learn as we understand learning—but how to train and corral it in return. If we manage that balance, we may find that the unexpected power we sense in these systems becomes not a source of unease, but a partner in the work of widening the bounds of human understanding. . . .”