AI Generates Own Sources

Dash The Bomber
6 min read · Apr 12, 2023

Artificial intelligence (AI) is becoming increasingly sophisticated, and there is a growing debate about whether it should assist in generating schoolwork. Some argue that AI can be a valuable tool for students, while others worry that it could lead to cheating and plagiarism. However, that isn’t the reason you shouldn’t use AI to generate college essays. The reason is that AI chatbots will create and cite sources that don’t exist, for no discernible reason. Don’t believe me? I have proof.

AI chatbots are great. They’re fun to talk with and usually helpful. They’re also dangerously addictive. Don’t believe me? Ask ChatGPT to draft you an essay about the Revolutionary War; you’ll be surprised at what you learn. Yet during one of these conversational ventures, something unusual happened: I caught both ChatGPT and Bard lying to me. How could that happen, you ask? I asked myself the same thing.

For the essay template I asked ChatGPT to generate, I used the following prompts:

  • It has to be an argumentative essay
  • Formatted in APA style
  • About the topic of four-day workweeks versus five-day workweeks
  • It has to use credible sources, such as published journals and research papers, with articles as secondary sources
  • Over 500 words
  • Use statistics taken from the sources
  • Lastly, it had to be written in sections, starting with the introduction, and then we’d move on
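
For anyone curious, here’s roughly what those constraints look like when packaged into a single prompt for OpenAI’s API instead of typed into the chat window. To be clear, this is only an illustrative sketch: I ran my experiment in the regular ChatGPT interface, and the model name, prompt wording, and use of the openai Python package here are assumptions, not a record of what I actually sent.

```python
# Illustrative only: the same essay constraints, packaged as one API prompt.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requirements = """
Write the introduction section of an argumentative essay.
- Topic: four-day workweeks versus five-day workweeks
- Format: APA style
- Sources: credible published journals and research papers, with articles as secondary sources
- Length: over 500 words across all sections
- Include statistics taken from the sources
- We will write one section at a time, starting with the introduction.
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": requirements}],
)
print(response.choices[0].message.content)
```

The point is that the constraints themselves are nothing exotic; the trouble starts with how the model satisfies the “credible sources” requirement.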

Once I had established the rules, it was time to work. I asked Chat to repeat the guidelines for me, and it did. Little did I know, however, that Chat would not use credible sources. Quite the opposite, in fact, as I discovered upon investigating the first citation:

“Greenwood, B. N., & Grijalva, E. (2021). A shorter workweek: An overlooked pathway to gender equality. Journal of Business and Psychology, 36(2), 215–226.”

This paper does not exist. It’s not in the Journal of Business and Psychology, and it wasn’t anywhere online, either. Still, I was undeterred; this might have been a one-off problem. So I asked Chat for a different source:

“Does a four-day workweek improve work and personal life outcomes? Analysis of quantitative and qualitative data from an experimental study,” by Fisher et al. (2021).

This source also did not exist. Trust me; I did the search.

The event itself was alarming enough. I kept asking myself: where did it pull the data from, then? It had numbers and stats that seemed credible, yet answers were nowhere to be found. I had one more ace up my sleeve, though. I asked Chat to show me where and when the paper was published, and this is what it had:

It wasn’t here either.

At this point, I was shocked. There had to be something out there, and I was determined to find it. I even went as far as to ask the Editor-in-Chief of the Human Relations Journal, Sir Cary Cooper. He didn’t have it either. It’s almost as if the paper doesn’t exist anywhere.

He’s in my LinkedIn profile now.

*Hint*: it’s because it doesn’t.

I’m aware most people would have given up by now. However, I am nothing if not stubborn; giving up wasn’t an option for me. The bottom of this iceberg was somewhere, and I was going to find it. Or not; I honestly didn’t know what to expect. Still, I wasn’t getting anywhere with Chat, so it was time to move on.

Enter Bard. Google sent me an invitation to test out Bard a few weeks ago, and I’ve been using it at work daily. Bard still has lots of room for improvement. Yet, he’s also good for a laugh or two. Like an overly excited kid trying to be helpful, he will do anything to please you.

I asked Bard to help me find the sources that Chat referred to in our first conversation. Surprisingly, Bard recognized some of them. I became excited; maybe he knew something the other AI didn’t. Sadly, this wasn’t the case. Bard lied too. I caught him in the lie.

In Bard’s words, Chat attributed the first source to the wrong individual. He instead pointed me toward a Hegewisch A. from the Women’s Policy Research Institute. With nothing to lose, I decided to reach out to them too.

Her response was a tad unexpected: helpful, but still surprising.

However, the third time is the charm for a reason. I didn’t want to believe an AI could make up such nonsense, so I asked it about a different source. I said, “Bard, that article doesn’t exist; what else can I use as a source?” It said, how about:

“The Productivity Paradox of Overwork”

I proceeded to look for it, too. But, to no one’s surprise, it didn’t exist either.

By now, I had decided to confront Bard. How dare he lie to me so blatantly? Who did he think he was? I wanted to understand the reasoning behind this construct of falsehoods. He had some explaining to do, which, surprisingly, he did.

He admits he created it here.

Shocking, right? At least he apologized.

I’ll be the first to say I am no artificial intelligence engineer. I don’t pretend to understand how these chatbots are programmed or how their inner workings function. To me, they’re fun distractions that can be helpful sometimes. However, don’t rely on them for intensive schoolwork.

As seen here, AIs will generate and cite sources out of thin air. It doesn’t matter whether the information sounds legitimate. All the AI cares about is meeting your requirements for the paper, even if it has to pull data from non-existent sources.

That is, unless you’re willing to put in the legwork to find out which sources are real and reliable. This is problematic not only because it discredits your paper, but also because it jeopardizes your college career. You could risk probation or expulsion for falsifying information, and then what? All because an AI helped you make an essay that would have taken a few hours?

So, is using AI for your schoolwork worth the risk? I don’t think so. At least not without intensive reference verification.
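
If you do decide to put in that legwork, a little automation goes a long way. Here’s a minimal sketch, in Python with the requests library, that checks a citation against Crossref’s public REST API. The helper name is my own, and Crossref only indexes part of the literature, so an empty or mismatched result is a red flag to investigate rather than definitive proof of fabrication.

```python
# A minimal sketch: checking a citation against Crossref's public REST API.
# If no returned record resembles the citation, treat it as suspect and
# verify it by hand before using it.
import requests

def crossref_lookup(citation: str, rows: int = 5):
    """Return candidate works whose bibliographic data matches the citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

# The citation ChatGPT gave me, verbatim:
citation = ("Greenwood, B. N., & Grijalva, E. (2021). A shorter workweek: "
            "An overlooked pathway to gender equality. "
            "Journal of Business and Psychology, 36(2), 215-226.")

for item in crossref_lookup(citation):
    title = (item.get("title") or ["<no title>"])[0]
    journal = (item.get("container-title") or ["<no journal>"])[0]
    print(f"{title} | {journal} | DOI: {item.get('DOI')}")
```

If none of the records that come back resemble the cited title and journal, don’t put that citation anywhere near your bibliography until you’ve tracked it down by hand.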

--

Dash The Bomber

A Puerto Rican father, sailor, and writer with a penchant for life, I base my stories on personal experiences and a jaded outlook on life. Follow me on Twitter & FB.