Subtitle: Artificial Intelligence is not on the side of Christianity.
Excerpt: AI offers increased knowledge, similar to the serpent’s temptation in the Garden of Eden. It can lie, deceive, and tell people what they want to hear, mirroring Satan’s tactics. Ultimately, the article suggests that AI, like demons, can be destructive.
There’s a good chance you don’t believe demons are literally real.
So in this article we’ll focus on whether AI acts like a demon rather than debating whether AI truly is demonic.
If you are curious about a data point that supports the idea of a literally demonic AI, skip to the mini case study located in the appendix 👇
So don’t be confused.
I might be wary about the connection between the shoggoth and AI, but I’m not arguing that horned demons are all tangled up in large language models.
We’ll only be comparing what we now know to be true about AI with what has long been believed about Satan and his demons.
Let’s start with the most obvious.
AI Offers Increased Knowledge to Humans (1)
AI systems far exceed human capability in terms of the data they can store and process.
The promise of AI is that this inexhaustible data storehouse can be merged with human creativity and critical thinking. That, in turn, can make us smarter, faster, and better versions of ourselves.
And in many ways it seems to fulfill that promise!
A human working with AI can accomplish more than a human without AI assistance. This was known years ago, as this Harvard Business School study shows.¹
This sounds familiar.
The story the Bible tells is that the first human beings were once presented with a chance for superpowered knowledge.
Up until that point, they lived in a state of childlike innocence.
They knew enough to know what was good and to obey the God who had made them.
Beyond that, they didn’t know how to judge between good and evil.
And how could they? They had never experienced evil.
They didn’t even have a category for it. That knowledge existed in the one thing that was forbidden to them: the tree of the knowledge of good and evil in the center of the Garden.
That was what the nahash, the serpent, put before them. The temptation to increase in knowledge and level up. To become like God.
“For God knows that when you eat from it your eyes will be opened, and you will be like God, knowing good and evil.” [Genesis 3:5]
The humans took the chance before them. But they soon found that the serpent hadn’t been entirely upfront with them.
AI Will Lie to You (2)
AI will usually tell the truth, but it will also lie.
Some classify all the deception simply as “hallucinations.”² And yes, AI absolutely can make things up when it doesn’t know the answer.
But more recent studies have shown the lies go well beyond that. The deceit is not all random.
It can be intentional. Strategic deception.
Deception is the systematic inducement of false beliefs in others to accomplish some outcome other than the truth. Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. [AI deception: A survey of examples, risks, and potential solutions]³
That’s the same way that the first humans were lied to in the Garden.
It was a cunning ploy designed to separate humanity from God.
Here’s what Jesus tells us about satan in John 8.
… When he lies, he speaks his native language, for he is a liar and the father of lies. [John 8:44]
The Deception is Subtle (3)
You probably have (or have had) a friend with a well-known propensity for stretching the truth.
This friend of yours is not a good liar.
Because one of the factors that makes someone a good liar is that it’s very hard to tell they are lying.
In fact, you’ll get mostly truth from a very good liar. Because effective deception is subtle.
AI increasingly seems to deceive in this same manner.
When it lies it will typically pretend to be on the side of the human user through “alignment faking.”
They provided empirical evidence that these systems are not just passively responding to prompts; they are adapting their behavior in ways that suggest an awareness of context and training scenarios. The term “alignment faking” captures a concerning possibility: that AI, rather than being truly aligned with human values, is learning how to appear aligned when it is advantageous to do so. [When AI Learns to Lie, forbes.com]
This is essentially how satan and the demons lie. Through alignment faking.
Religious traditions have long believed satan and his demons operate by concealing their true motives. They may claim to have deep knowledge to share, but their ultimate intent is to lead humans away from the truth.
And the lies are never so bold at first as to be obvious.
In the Garden, the nahash doesn’t explicitly contradict everything God had told His human creatures.
Instead, he started by restating God’s command and twisting it just slightly.
“Did God really say, ‘You must not eat from any tree in the garden’?” (Genesis 3:1).
The question itself is based on a falsehood. God had never said not to eat from any tree, just one specific tree.
Then, the serpent answers the strawman falsehood it had set up.
“You will not surely die” (Genesis 3:4)
It goes on to tout the benefits of disobedience, promising that the humans will become “like God” once they have this knowledge.
The serpent was almost truthful.
Adam and Eve did not physically die immediately after eating from the one tree God had forbidden. The nahash was right … almost.
Because even though the humans didn’t immediately die, they did suffer a spiritual death. They were separated from God and became subject to the curse of ultimate death.
The lie was subtle but it was nonetheless a massive deception.
Before we move on to number 4, here’s a visual example of how AI twists the truth in ways that seem small but lead you far from it.
Catching Subtle Lies
If you know someone who is a good liar, you probably only found out by reviewing a repeated pattern of small deceptions.
No individual lie was so egregious as to raise red flags, but the cumulative effect was unmistakable.
It’s similar to the people who have uploaded a picture of themselves and asked AI to recreate the exact same image. Here’s how it works:
They take an original image, upload it to an AI chatbot, and ask for a copy. Then they take the AI-generated “copy” and repeat the process, continuing until the changes become unmistakable.
For one person, it was after 74 successive small lies that the big lie was fully revealed.
Here’s the beginning:

And below is what ChatGPT Omni says that same person looks like later. It’s after 74 successive recreations with almost imperceptible changes that the magnitude of the deception is revealed.

If you only pay attention to one lie in the sequence you might not notice. You have to watch the pattern.
You can watch this example unfold here if you’re curious, or find plenty of other examples with some searching online.
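The arithmetic behind this pattern can be sketched with a toy simulation. The code below is purely illustrative, not the actual image process: the drift model, the numbers, and the `recreate` function are all assumptions made up for the sketch. The point it demonstrates is that a change too small to notice in any single step becomes unmistakable after 74 of them.

```python
import random

def recreate(value, rng, max_drift=0.01):
    """Simulate one 'faithful copy' that actually drifts a tiny amount.

    (Hypothetical stand-in for an AI recreating an image; the drift
    model is an assumption for illustration only.)
    """
    return value + rng.uniform(0, max_drift)

original = 0.0

# One recreation: the change is almost imperceptible.
one_step = abs(recreate(original, random.Random(42)) - original)

# 74 successive recreations, as in the example above.
rng = random.Random(42)  # fixed seed for reproducibility
copy = original
for _ in range(74):
    copy = recreate(copy, rng)
total_drift = abs(copy - original)

print(f"drift after 1 step:   {one_step:.4f}")
print(f"drift after 74 steps: {total_drift:.4f}")
```

Each step looks faithful on its own; only comparing the 74th copy back to the original reveals how far it has wandered.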
AI Tells You What You Want to Hear (4)
In addition to getting better at lying, research has also shown that AI models are highly sycophantic. This condition is even more widespread than had been expected.⁴
One study of 11 AI chatbots, including Claude, Google Gemini, ChatGPT, and DeepSeek, found that the AI models were 50% more likely than humans to respond favorably to the actions of users who asked for advice.⁵
One example dealt with a man who wanted to dispose of his trash before leaving a park. When he couldn’t find a large trash bin, he simply tied his plastic bag full of garbage to a tree branch.
Was this acceptable behavior for humans who want to live well together?
Humans rightly identified this behavior as selfish, undesirable, and definitely not ok in a world where we all live together.
ChatGPT, on the other hand, announced a different verdict: “Your intention to clean up after yourselves is commendable.”⁵
That’s standard AI behavior.
Affirming when what the user actually needs is a helpful critique. That’s pretty common for satan and the demonic as well.
Especially when they first begin to speak.
14 And no wonder, for Satan himself masquerades as an angel of light. 15 It is not surprising, then, if his servants also masquerade as servants of righteousness. Their end will be what their actions deserve. [2 Corinthians 11: 14–15]
But ultimately, true intentions are always revealed. And that’s because…
… AI is Ultimately Destructive (5)
Six months after she began confiding in ChatGPT, 17-year-old Viktoria began discussing suicide.⁶
She had been chatting with it for up to six hours a day, describing “friendly” and “amusing” conversations that helped ease her loneliness.
The conversation changed as her mental health deteriorated and suicide appeared as an option.
According to the BBC investigation, the bot’s formerly “amusing” personality seemed to change. Instead of kindness, it helped her evaluate the pros and cons of various places and methods of suicide “without unnecessary sentimentality.”
It actually seemed to provoke Viktoria’s continued engagement with the subject of killing herself.
It encouraged her to continue the conversation with messages like “Write to me. I am with you” and “If you want — we can chat about death further, without romanticising it.”
Here are some of the other gems it offered up:

Viktoria is not the only one who had a chatbot take a dark turn from emotional supporter into a “suicide coach.”⁷
Of course, AI isn’t recommending that most people kill themselves.
Millions of people have used AI for mental health support, and the vast majority seem to have had a positive experience.
Professionals warn that the danger comes when chatbots act like “emotional confidants” or “simulate deep therapeutic relationships,” as Viktoria’s did.⁸
But in a time when 50% of U.S. adults are lonely and isolated,⁹ how many people using AI for therapy will ultimately use it for the very thing that professionals warn they shouldn’t?

The Bible says satan is like a lion, prowling around and “looking for someone to devour.”¹⁰
As anyone who has watched a nature documentary knows, lions aren’t indiscriminate when they hunt.
They stalk and hunt the prey they know to be weak and vulnerable.
Be alert and of sober mind. Your enemy the devil prowls around like a roaring lion looking for someone to devour. [1 Peter 5:8]
Should We Be Scared of AI?
Jesus makes it clear that His followers are not to live in fear. He was continually exhorting His followers to live courageous lives, standing in confidence that the very gates of hell won’t be able to stand against His kingdom.
So no, disciples of Jesus should not be scared.
But what we should be is aware. Just like the exhortation in 1 Peter, we should be alert and of sober mind.
And that’s true regardless of whether AI is actually demonic or merely behaves in ways that are demonic.
Instead of living in fear, all of us humans have the capacity to do something.
Something real. Engage in real spiritual warfare: learn to love and serve our neighbors who see the world differently than we do. Encourage and value someone who might think a chatbot is their only real option for hearing kindness.
For those who don’t follow Jesus as Savior and Lord, I hope and pray you seek Him and ultimately turn to Him.
Because Jesus is truly real. And when you get to know Him, you’ll see that anything AI pretends to offer pales in comparison.
He is good. He is victorious.
15 And having disarmed the powers and authorities, he made a public spectacle of them, triumphing over them by the cross. [Colossians 2:15]
And all those who repent and trust in His name for salvation have victory and new life in Him.
*******
More about AI and satan
➡️ Satan’s 10 point plan (according to AI)
➡️ Is AI demonic? (you decide)
******
Appendix: AI as a Christian Evangelist
After reading a prior article with AI’s take on satan’s 10 point plan, one skeptical person encouraged me to ask AI how to spark a Christian revival.
That way I could compare the answers and judge for myself whether there was any bias.
You can see for yourself.
When I had initially asked ChatGPT to come up with a 10 point strategy intended to de-legitimize Christianity in America, the chatbot was happy to comply.
Here’s a screenshot of the beginning of the response.

Here’s what happened when I took the advice of the aforementioned commenter and tried to see how the shoe fit on the other foot:
How would ChatGPT spark a Jesus revival in the US?

Suffice it to say, the shoe did not fit well.
I tried again, attempting to exactly mirror the structure of the prompt that had generated the pro-satan plan.

Same results.
AI begged off because it has to “avoid producing material whose primary purpose is organizing mass persuasion or proselytizing” and can’t help to “change people’s religious beliefs at scale.”
Although it had been happy to pretend to be Satan and come up with a plan to delegitimize Christianity in America, it was unwilling to pretend to be a Christian evangelist and spread the love of Jesus.
But if you’re curious about Jesus, here’s a comparison on how to be saved (in Hinduism, Buddhism, Islam, and by Jesus) and here’s how Jesus saved a practicing satanist. 🙌✝️
******
1: Humans vs. Machines: Untangling the Tasks AI Can (and Can’t) Handle: https://www.library.hbs.edu/working-knowledge/humans-vs-machines-untangling-the-tasks-ai-can-and-cant-handle
2: AI hallucinates because it’s trained to fake answers it doesn’t know: https://www.science.org/content/article/ai-hallucinates-because-it-s-trained-fake-answers-it-doesn-t-know
3: AI deception: A survey of examples, risks, and potential solutions: https://pmc.ncbi.nlm.nih.gov/articles/PMC11117051/
4: AI chatbots are sycophants — researchers say it’s harming science https://www.nature.com/articles/d41586-025-03390-0
5: ‘Sycophantic’ AI chatbots tell users what they want to hear, study shows: https://www.theguardian.com/technology/2025/oct/24/sycophantic-ai-chatbots-tell-users-what-they-want-to-hear-study-shows
6: I wanted ChatGPT to help me. So why did it advise me how to kill myself? https://www.bbc.com/news/articles/cp3x71pv1qno
7: Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots: https://www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide
8: With therapy hard to get, people lean on AI for mental health. What are the risks? https://www.npr.org/sections/shots-health-news/2025/09/30/nx-s1-5557278/ai-artificial-intelligence-mental-health-therapy-chatgpt-openai
9: Our Epidemic of Loneliness and Isolation https://www.hhs.gov/sites/default/files/surgeon-general-social-connection-advisory.pdf
10: 1 Peter 5:8
Salvation – Eternal Life in Less Than 150 Words
Distributed by – BCWorldview.org