Friday, January 3, 2025

AI and the Teaching of Writing: an inner dialogue





A former colleague had an assignment I like: “A political autobiography,” where students trace the arc of how their beliefs have changed over time.

In undergrad, I was assigned to write a few Socratic dialogues, channeling the voices of (mostly) thinkers we read in class and having them debate ideas.

I really enjoyed the latter and I always wanted to write the former, so I’m merging the two in the post that follows. The subject up for examination is “Artificial Intelligence in Education.” The participants in this not-quite Algonquin Round Table are:

-Myself at age 19 (19).
-Myself from the previous decade, when I was an enthusiastic supporter of transhumanism (THJ).
-Myself right now (ME).

ME: Thank you all for being here today inside my head. Please avail yourselves of beer, scotch, or coffee, depending upon your stage in life. Let me apprise you of the situation in college writing classrooms in the year 2025.
Through Large Language Models (LLMs) made possible by artificial intelligence, a student can now input the requirements for a writing assignment and the AI app will compose a complete draft to spec. Sort of. It’s driving me nuts. Is this something you would do, Jon at 19?

19: Probably.

ME: I’m surprised, and yet I’m not. Why don’t you walk us through your “probably.”

19: At this point, I like to write. It comes easily to me. But I don’t really want to do anything with it.

ME: Yet.

19: Huh?

ME: Never mind. Continue.

19: Remember that I really don’t like being told what to read. 

ME: Your mind is not open yet, no.

19: Same goes for writing. If I’m not interested in the assigned subject, then it’s just one more thing I’ve got to do. So yeah. I’d probably have AI do it. But not if it could get me in trouble. You know how scared I really am about getting in trouble.

ME: Still am. But what if I told you that a) present AI detection methods are iffy at best, and b) there are apps that can render the final text detection-proof?

19: Well all right! No harm, no foul. It’s an efficient way to take care of homework.

ME: Believe it or not, you will come to loathe the word “efficient” for several reasons, but for now I must ask why you’re going to college if you don’t care about learning anything.

19: Because education has never been about learning in my experience.
It’s about getting good grades. The public schools hammered at me to “get good grades.” Parents did much the same. In fact, Dad warned that I needed to “get good grades” so we could get our “good student discount” on auto insurance. What did all that tell me and my peers?
Education is purely transactional. It’s not about what you learn, but all about the grade you end up with at the end. Hell, I’ve already forgotten things from high school because I memorized them for a test, then let them go when it was over. You get the grade and you do it by any means necessary. It takes everything I have to get by in math and computer science, so I’d take any advantage I can get, including AI.

THJ: I’d like to jump in here.

ME: Go ahead, TransHuman Jon.

THJ: At age 19, we’re very interested in science and technology, but poor math skills will eventually make classes in those subjects feel like smashing our head against a brick wall. Writing comes far more naturally to us, so we’ll switch teams, and I think we already inwardly know that by 19.

19: WHAT??

ME: That’s another blog post entirely.

THJ: But doing better in math was never about “just working harder,” as “they” told us. We have a condition called dyscalculia that was unheard of in rural Indiana in the late 20th century. For us to do well in math-driven subjects, it takes more than just “working hard.” Think of the benefits we would have had if AI could have compensated for those genetic defects and allowed us to participate in STEM arenas.
By my time, we know that plenty of students have trouble with writing in the same way we have trouble with math. Wouldn’t it be great if AI apps could help bring them up to baseline level so that they can engage and succeed?

ME: I like the concept of this kind of equity. My problem lies with the question of “is it really ability?” For example, let’s say I was allowed AI-driven compensations for my math struggles at age 19. Let’s further say that it did allow me entry into STEM work. Am I truly proficient in the tasks if I’m utterly dependent on AI tools? What happens when I don’t have access to them and something needs to be done?

THJ: Well, I don’t foresee a future where people won’t have access to them. AI will be embedded in basically every app on our phones, and eventually in wearables. Heck, once we have brain-computer interface chips, the world’s our oyster.

ME: Yes, TransHuman Jon. At your point, you’ve been reading a lot of Ray Kurzweil, and you’re giddy at the prospect of cybernetics “evolving” us past our human frailties. Well, I’m here to tell you there are a lot of problems with that line of thinking, not the least of which being that those prospects probably won’t pan out.

THJ: I’m happy to debate that with you at another time, but we’re getting off topic.

ME: True.

THJ: The question now is whether use of AI tools counts as “cheating” in accomplishing a task. Might I remind you that ever since our switchover to pursuing the written word, we’ve been utterly calculator dependent for any situation involving mathematics? And we’ve gotten by all right.

ME: Have we? We’ve screwed up plenty of times with calculators. Turns out if you don’t know basics such as order of operations, that tool really doesn’t help you. 

THJ: Then produce more extensive tools and apps.

ME: Even to the point of rendering writing an irrelevant skill?

THJ: Well, that does bother me. At this point, I’m only beginning to hear about such a notion. In terms of artificial intelligence, I’m musing on us constructing the ultimate human creation: an artificial brain that thinks faster than we can, sees things in data that we can’t see…or at least not without considerable effort, and can begin to offer solutions to big problems such as climate change. This AI might even develop consciousness. That would be amazing!

ME: If it hasn’t happened yet, one of your present colleagues will eventually explain why consciousness in an AI is unlikely to happen, and will remain the stuff of the science fiction novels 19 is reading.

19: They’re comic books!

ME: What we’re getting in reality is AI that circumvents the learning process for students and puts other people out of work. What’s more, the inner workings of these AIs are known only to a handful of tech bros like Elon Musk.

THJ: What’s wrong with Elon Musk?

ME: Give it a few years. Right now I’m reading The Shallows by Nicholas Carr.

THJ: That old guy who thinks the internet is making us dumb? I don’t know about him, but the internet is making me smarter.

19: Yeah!

ME: Believe it or not, our opinions on this matter will change as we’ll notice disturbing changes in our own attention span and our ability to stick with long texts.

Like this one.

Anyway, Carr cites the account of how Friedrich Nietzsche’s writing changed when he was forced to move to a typewriter.

THJ: So? Technology always changes language. Case in point, you’re writing much more right now for this blog post because you’re typing in MS Word, and not scribbling with pen and paper.

ME: Exactly, but the writing AI is giving us is slop. That’s actually the term for it. Slop.

THJ: It won’t always be that way. AI will only get more sophisticated, and I still argue it may achieve consciousness.

ME: That’s cold comfort. Not the “consciousness” unlikelihood, but the notion that AI will eventually produce text that is truly passable to experts in language and rhetoric. When we change writing, we change our thinking. THJ, you brought up the calculator example. We have essentially outsourced all of our mathematical thinking to calculators. What happens when we, the collective human “we,” outsource even more of our thinking to AI by having it write entirely for us? And it’s not just losing the ability to think. It’s also the sidelining of something essential to the human experience, and that’s emotion.

19: Oh God. Tell me I don’t turn out to be a hippie.

THJ: Or worse. A romantic.

ME: I assure you I’m neither of those things. I still have my eyes wide open to the ugly realities of life, people, the universe, and everything else. And THJ, your advocacy of a brain chip that allows one to shut off emotions and enter a Spock-like state remains enviable to this depressed man. However, all this leads me to recall, frankly, dumb ideas I once held just before reaching 19’s point in time. Data and “efficiency” aren’t always the best guides for making decisions. A few of the biggest choices humanity has made didn’t include those factors, but rather were based on what was *right.* Our sense of right arises, at least partly, from our emotions. We learn these concepts from *checks notes* the humanities. Look at that pic at the top.

19: The one up there?

ME: That’s why I said “top,” yes. It’s an ad for an AI app that, as the effervescent marketing copy reads, turns “hard books” into “easy books.”

THJ: Hoo-boy. Didn’t see that coming.

ME: We never do. And before anyone says it, this is much different from CliffsNotes. CliffsNotes doesn’t masticate a text. THJ, we know that in a work like The Great Gatsby, each word has been chosen, the construction of each sentence has been labored over, to convey a specific thought. And, often, to evoke a specific emotion. The reader is meant to chew over the text and work to find its meaning. It’s a workout for the brain. What this AI app does in the name of expediency and efficiency…

THJ: Is give us Brave New World.

ME: Right.  

THJ: I know. We’ve been heading towards it since Huxley wrote it in the 1930s.

ME: Barreling towards it, I’d say. And here’s where my thinking has diverged from yours, THJ. AI can deliver great benefits. But there are many dangers that ...

THJ: I’ve never argued otherwise.

ME: True. But the biggest concern for me now is what happens to human thought. Mathematics is thinking. If someone puts a long division problem in front of us and we don’t have a calculator, we’re screwed. You argue that’s why we have the technology, but I lament the fact that there’s a whole order of thinking we can’t engage in, at least not without great difficulty. At least we can write.
But how much longer will that be valued? Many view writing as basic communication, but we know it’s more than that. Much more. As Didion said, “I don’t even know what I think until I write it down.” Writing is thinking. It’s a means for coming to understand the world and its ideas. Carr closes out his book by saying, “…as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.”  He gives the example from 2001 of astronaut David Bowman yanking out HAL’s circuitry while HAL says, “I can feel my mind going. I can feel it. I can feel it. I’m afraid.” And now, officially, so am I. For the same reasons.

THJ: OK, but you have to know that there is no stopping artificial intelligence. Barring a disaster that gives us cause to pause, it will only accelerate and become utterly ubiquitous. It will be the preponderant technology of the century, and maybe even of human history. You’re standing on the beach in front of a tidal wave. How are you going to surf it?

(beat)

ME: I don’t know.  
Follow me on Twitter: @Jntweets