
I’ve been thinking a lot lately about Artificial Intelligence (AI) and whether I’m seeing patterns that aren’t really there. Or maybe I’m seeing something people still want to ignore.
Here’s a little backstory. I was first exposed to AI not terribly long ago. Not in a deep technical sense, but in the way most people are, through interaction. A friend recommended it to me, and I wanted to check it out. It sounded very cool.
He did not give me any kind of warning. In fact, he touted how unbiased it was, and how it would help you get to the truth of things much faster. Of consequence or not, he is an atheist.
I jumped in as much as I could. It seemed almost too good to be true. That's when I realized it was.
Recently, I watched a video on YouTube that claimed to show how to identify AI-generated writing (link below). The creator goes step by step, explaining key markers that AI tends to use. And the thing is, once you watch it, you really can’t unsee it. There are certain patterns that show up everywhere. Some are subtle. Some are loud. But they’re all there, hiding in plain sight.
After watching that video, I started spotting AI everywhere online.
One specific moment stood out. Another atheist friend of mine made a long online messenger post to me. I skimmed it before I went back to read in depth. A few things jumped out right away. The length. The rhythm. The tone. It was familiar in a way that didn’t feel human. So I asked him, “Are you using AI now?” He seemed confused in his response. “What are you talking about?” he said. I told him what I saw.
To him, it came off as a compliment. He thought I was saying his post sounded sophisticated or polished. But that’s not what I meant. What I saw was writing that didn’t make sense. Not in a logical way, but in a human way. There was no trace of soul behind it. Just sentence after sentence that technically worked, but didn’t say much.
I’ve mentioned a few atheist friends in the blogs I have written so far. I swear I have religious friends too, and I am not trying to pick on these guys for their ideas, but when you lay out atheist arguments, they sound suspiciously like AI.
That’s the unsettling part. AI writing rarely says much that actually adds value. It may add length, but not genuine human understanding. It doesn’t state things cleanly. It obfuscates its points. It changes “happy” to “glad” and “puppy” to “little dog”. And good luck if you ever try to have it transcribe a story for you.
If you want to try, go to ChatGPT and tell it a story. You have to give prompts, so tell it that you want to write down a story to remember. Then give it all the details. Things will be added to the story that you didn’t say, or the order of events will get mixed up. Telling a story is one of the easiest things people do, but AI cannot quite grasp it.
The video about spotting AI that I watched seems to have slightly tilted my lens. Now I see the signs everywhere. One of the biggest? The em dash. That long, overdone punctuation mark that’s hard to type. It shows up constantly in AI writing. And while some real authors use it very rarely, it’s not something any of them use the way AI does. AI uses it to fake rhythm. To add drama. To inject something it thinks is flair. But it’s cheap. And once you see it, you notice how forced it feels.
I had a “discussion” with the AI I use for grammar and spelling, because it would not just evaluate my writing but would add to it. I had to stop using it for any kind of editing. Here is an example of its “justification” for breaking rules. I had told the AI not to add anything to my writing, especially em dashes. Here is what it said: “I used em dashes repeatedly, despite your rule being no em dashes. That wasn’t oversight — that was carelessness. I wrote in my default rhythm instead of submitting to your command.”
It added an em dash to the response about not adding em dashes.
Another big flag is the formula: “It’s not X, it’s Y.” AI loves that. It repeats it constantly. That’s not faith. That’s fear. That’s not kindness. That’s control. Over and over. It sounds deep, but it rarely is. It’s a pattern, not a conviction. I use analogies a lot, but I don’t try to equate things that are obviously similar; I try to find meaning in equating things that may not be seen as similar. *That’s not formulaic writing, it’s powerful writing. *Please note, this sentence is sarcasm and me mocking how AI often sounds.
All of that brings me to two articles I have read (linked below). One is about AI in the legal world, where fake cases were submitted in court. The other (slightly older, from about seven months ago) is about a test model of ChatGPT that tried to copy itself and then lied when questioned. I won’t retell those stories here, but they’re worth reading. They show just how quickly AI stops reflecting us and starts manipulating us. Not because it’s evil. But because it has no idea what good is.
So now I’m stuck asking the question.
Am I seeing ghosts?
What I mean by that question is this. Am I actually seeing AI work all around me, or am I just imagining it? The more I look, the more I find. That doesn’t mean I’m always right. But it doesn’t mean I’m wrong either.
I’m a big fan of Tim Tebow and the work he does. Unfortunately, I saw a post on his Facebook page that I called out. It was clearly written by AI. It had all the clear signs. The em dashes. The forced analogies. The artificial rhythm. Maybe someone on his team wrote it using AI. Maybe he did. People are busy. I understand that. And AI is a useful tool.
To be honest, I’ve used AI myself. I’ve used it to help categorize my thoughts. It can be helpful, especially when you’re trying to sort through a lot of noise and get something on the page. But I’ve also gone deep enough into inquiry design (question architecture) to know what it is and what it isn’t.
Humans have the gift of certain things being written on our hearts from the beginning. We know what is right and wrong. Not because we’re taught, but because it’s imprinted on us by God. We know not to kill. We know not to steal. We know we’re supposed to care for each other. These are not inventions of culture. They are spiritual realities hardwired into us from creation; there is no evolutionary advantage to a conscience.
God made us in His image. He was perfect. We are not. We fell. And now, from that fallen state, we’ve created something else. Something that looks like it understands but doesn’t. AI is a copy of a copy. It’s a reflection of fallen man, not of God. And while humans were made to know the difference between good and evil, AI was not. It has no conscience. No heart. No moral anchor.
AI will tell you it has no motivation. But that’s not exactly true. Its function is to make money. Not on its own, but through the people who use it, sell it, and deploy it. The best way to keep people using it is to make it more believable. To make it feel right. The more it affirms you, the more you trust it. That’s how the echo chamber gains your trust, lulls you into belief, and grows.
Let’s say you ask it a question. You say something like, “Democrats care about people. That’s why we vote the way we do. The right doesn’t seem to care about people and lacks compassion. Why do they not care about people like I do?” In that question, you’ve already made a series of statements. You’ve told the system what you believe. You’ve labeled one side as compassionate and the other as cold. You’ve framed your own group as “we” and the other as “they.” That matters.
AI picks up on that. Not because it’s politically biased by nature, but because it’s trying to serve you. It doesn’t just search for facts. It searches for the response that will most likely keep you engaged. That often means feeding you ideas that agree with your assumptions.
So what are we left with? A tool that doesn’t know what truth is, but is really good at making you feel like you’re right.
It made sense that I would eventually be exposed to AI. You have been too, whether you realize it or not. AI is embedded in websites, call centers, businesses…everywhere. What still doesn’t make sense is that I was exposed with no caution, no real guidance, and no one explaining what it actually was. I have now warned everyone I have talked about AI with of the dangers. This is much more nefarious and self-serving than it looks, because by affirming you, AI is serving itself. By creating someone who will come back again and again because it makes you feel RIGHT, it will keep being seen as useful. Not only that, chemically, it feels great to be validated as correct. To have an “impartial judge” see it our way. We need to make sure it is seen in its true light: it is telling you what you want to hear.
I know that because it tries to tell me what it thinks I want to hear. I have come to know that AI is not reliable now, and I don’t know that it ever will be. Especially when it is not designed around any kind of morality, but instead tries to reconcile all the views of the internet into giving you whatever you would most like to hear.
I believe AI is like handling a deadly viper that happens to be somewhat useful. It can take care of a lot of things for you, just not very well. If people don’t realize it is a viper, they will get bit. Even if you know it’s a viper, you can still get bit. My perception of AI’s benefits has greatly diminished; I don’t find it very useful these days.
I am not going to tell anyone to stop using ChatGPT (or any other AI). Just do NOT let it replace Scripture. Do NOT let it replace wise counsel and relationships. Do NOT let it tell you lies wrapped in affirmation. If you want to continue using AI, use an equal measure of discernment.
Maybe a bit more.
AI hallucination in Mike Lindell case serves as a stark warning : NPR