Blog By ChatBot?

Photo by Markus Spiske

One of the most exciting and disturbing developments of the last year is the emergence of highly sophisticated artificial intelligence “chatbots” capable of producing text often indistinguishable from, and sometimes superior to, that of human beings across a variety of subjects. The most prominent of these is ChatGPT, developed by OpenAI. ChatGPT stands for Chat Generative Pre-Trained Transformer. It was released as a prototype on November 30, 2022, setting off an explosion of discussion.

ChatGPT will respond to a question or command. For example, I asked ChatGPT “Who was Louis Bromfield?” ChatGPT responded:

Louis Bromfield was an American author and conservationist, born in Ohio in 1896. He was a successful and popular writer in the 1920s and 1930s, producing a number of novels, including the Pulitzer Prize-winning “Early Autumn” (1926). He was also known for his conservation work and was a pioneer in sustainable agriculture and land management practices.

In the 1930s, Bromfield established an experimental farm in Ohio, known as Malabar Farm, where he implemented his ideas about conservation and sustainable agriculture. He wrote several books about his experiences on the farm, including “Malabar Farm” (1947) and “Pleasant Valley” (1945), which detailed his efforts to preserve the natural environment and promote sustainable farming practices.

Bromfield’s work had a significant impact on the fields of agriculture and conservation, and his legacy continues to influence modern thinking about sustainable land management and environmental protection.

ChatGPT, generated 2/2/2023 at 7:38pm

I subsequently asked it to “Say more about his writing and how his works are evaluated.” It produced a response focusing on his novels but said little about his later works, which were generally considered inferior, churned out to fund his farming efforts. So, I asked it about his work at Malabar Farm in Mansfield, Ohio, which I have visited several times. It described his pioneering efforts in sustainable agriculture and described the farm as financially successful. The latter is a more dubious claim. Bromfield was constantly trying to raise money from writing and movie deals to make ends meet.

What is concerning to many is that ChatGPT generates responses that are often factual and crisply written, as in the example above. Can you see how this might be used by students pressed for time, or just not that eager to put in the work to write a term paper? There is a good deal of concerned conversation in academic circles about the implications for everything from admissions essays to term papers to academic articles.

It is also being suggested that ChatGPT and other chatbots could put writers out of business in many fields. It can write advertising blurbs, and in some genres, especially those with fast turnaround times, novelists are using ChatGPT to at least help produce content.

On the positive side, a case can be made that ChatGPT is another tool, like Wikipedia or Google, for quickly researching a subject. People are already talking about the kinds of tasks that could be done by a chatbot. This could improve productivity, although it may put many writing jobs on the line. It’s probably time for those who write in some form for a living to ask, “Can a chatbot do that?”

At the same time, there are problems with blind reliance on chatbots. At its most basic, OpenAI warns, “May occasionally generate incorrect information.” I indicated some instances in the examples above where the responses were questionable. I also asked ChatGPT for citations supporting its responses and it said it could not do that but recommended several works, none of which, as far as I can ascertain, actually exist. This problem has been widely noted as has a tendency of ChatGPT to “hallucinate,” to create totally fanciful responses. ChatGPT also warns:

  • May occasionally produce harmful instructions or biased content
  • Limited knowledge of world and events after 2021

Teachers are already developing assignments that recognize the limitations of ChatGPT and testing those assignments on ChatGPT. Others are assuming its use and integrating it into the composition process. The use of ChatGPT actually offers an interesting exercise in how one verifies the veracity of information. Since the sources on which ChatGPT draws aren’t transparent, the writer will want to look at other reputable sources, and this helps teach discrimination between reputable and questionable sources.

The biggest problem, of course, is plagiarism. To use ChatGPT’s output verbatim, or substantially so, without quotation marks or attribution, representing it as one’s own ideas, is plagiarism. TechCrunch reports that OpenAI has even created a tool to detect AI-generated, including ChatGPT-generated, text. At present, it is only 26 percent effective. Now the interesting question comes: how does one handle works that do cite ChatGPT? I suspect that syllabi and specific assignments will have to address this, as well as the consequences for chatbot plagiarism.

So, what about blogs by chatbot? I tried this experiment. I asked ChatGPT to review Alex Haley’s Roots. I’ve never written a review of this book. It produced the following:

“Roots: The Saga of an American Family” by Alex Haley is a Pulitzer Prize-winning book and television miniseries that was first published in 1976. It is a historical novel that traces the journey of an African man named Kunta Kinte and seven generations of his descendants, from his capture and enslavement in Africa, to his life in America and the eventual liberation of his descendants. The book was based on Haley’s own genealogical research and oral history from his family, and it was groundbreaking in its depiction of the African American experience. It sparked a renewed interest in genealogy and African American history, and has been widely praised for its impact on American culture and society.

ChatGPT, generated 2/2/2023 at 7:00 pm

That’s a nice, succinct summary, but not nearly long enough for my blog reviews. So I tried “Review in the style of” and ChatGPT produced this:

“Roots: The Saga of an American Family” by Alex Haley is a magnificent piece of literature that tells the story of one African American family’s journey from slavery to freedom. This book is a powerful and emotional read that provides a vivid representation of the horrors of slavery, as well as the resilience of the human spirit.

The narrative follows the life of Kunta Kinte, an African man who is captured and sold into slavery, and traces the experiences of his descendants over several generations. The character development is excellent, with each character being unique and memorable in their own way. Haley’s writing is both descriptive and evocative, painting a vivid picture of the time period and bringing the story to life.

One of the most significant aspects of “Roots” is its depiction of slavery, which is graphic, honest, and heart-wrenching. The book serves as a reminder of the atrocities committed against Africans and African Americans, and it’s a testament to the strength and resilience of the human spirit.

In conclusion, “Roots” is a landmark work of African American literature that should be read by all. Its impact on American culture and society cannot be overstated, and it remains an important and relevant book to this day. Highly recommended!

ChatGPT, generated 2/2/2023 at 7:02 pm

Hmm. That does sound a bit more like what I do–still shorter and less detailed than what I usually write. I rarely use words like “emotional read” or “Highly recommended.” I do like words like “resilience” and “evocative.” It does follow the form of many of my reviews, although it feels more “generic” than what I might write. And I found no typos, which can rarely be said of me. It would be interesting to know what others think.

But I won’t be turning to ChatGPT to write my reviews or other blog posts. Here’s why:

  • Fundamentally, I like crafting my own reviews. I write them to remember what I read.
  • ChatGPT cannot know by itself how I actually engaged with a book–what I thought, how I reacted–it can only summarize the content and mimic my style. There is a serendipitous element to all this–I often discover what I think as I write.
  • For better or worse, my writing cannot help but reflect my life experiences and my deepest values, neither of which are static.
  • Part of the implicit contract with both publisher and public is honesty. Will ChatGPT say that a book just doesn’t cut it?
  • No one needs to visit Bob on Books to know what ChatGPT thinks of a book (and I would have to be honest in attribution).

Chatbots are here to stay. We’ve been using Siri and Alexa for years. We engage them when we seek online support. And the new AI of ChatGPT is becoming increasingly successful at simulating our human neural networks while accessing vast sources of information and articulating its responses in reasonably interesting text. And those mistakes? AI is iterative, which means it learns and corrects those mistakes. I asked ChatGPT about something I’m very familiar with, the history of Youngstown. It was accurate, except it said that John Young was from Pennsylvania, when in fact he was from Whitestown, New York. I corrected this and then asked where John Young was from, and it gave the correct answer. Part of the reason for the free release of this powerful tool was to learn and refine it through user interactions like the ones I’ve recorded here.

But I won’t be using it to write my blog. I’ll keep reading the books, keep writing about things that interest me, and keep researching, checking sources against sources when I can. That doesn’t guarantee that I won’t make mistakes, but they’ll be mine, and part of my own iterative learning process, part of my lifelong formation. I will not delegate that to a chatbot.
