The AI Illusion: Why ChatGPT and Pseudolaw Trick Us Into Believing Nonsense

Artificial intelligence is transforming the way we produce and consume information. Tools like ChatGPT generate text that sounds convincingly human, while pseudolegal arguments borrowed from sovereign citizen groups are creeping into courtrooms worldwide. But what if these two trends have more in common than we realize?

Dr. Joe McIntyre’s research explores a fascinating parallel: Both ChatGPT and pseudolaw rely on form over substance, creating the illusion of meaning while lacking actual depth or validity. This blog post will break down how human psychology makes us vulnerable to these illusions—and why it’s crucial to develop digital and legal literacy to see through them.


Why Do People Fall for AI and Pseudolaw?

If you’ve ever seen a face in a cloud or a smiley face in an electrical outlet, you’ve experienced pareidolia—the brain’s tendency to find patterns in random input. This pattern-seeking ability is essential for survival but can also deceive us.

When we read ChatGPT’s responses or listen to a confident pseudolaw guru citing legal nonsense, we think we’re seeing expertise and meaningful information. In reality, we’re witnessing pattern recognition gone wrong—what McIntyre calls conceptual pareidolia. We mistake form (legalese or well-written text) for actual substance (valid arguments or true facts).


ChatGPT: A Confidence Trick on a Global Scale

How Large Language Models Mimic Human Speech

At its core, ChatGPT is not designed to understand meaning. It predicts the next token (roughly, the next word or word fragment) in a sequence based on patterns in massive amounts of training data, producing statistically likely continuations rather than fact-checked information.

Imagine an AI system trained to predict movie dialogue. It might generate a convincing Star Wars script by recognizing common phrases, but it does not understand the plot, emotions, or themes of the movies. It’s just playing an advanced game of Mad Libs with probabilities.
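To make the “Mad Libs with probabilities” idea concrete, here is a deliberately toy sketch in Python: a bigram model that picks each next word according to how often it followed the previous word in its training text. Real systems like ChatGPT use transformer neural networks over subword tokens at vastly larger scale, but the underlying task is the same: predict a statistically plausible continuation, with no model of truth.

```python
import random
from collections import defaultdict, Counter

def train(corpus: str) -> dict:
    """Count which word follows which word in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

def generate(counts: dict, seed: str, length: int = 10) -> str:
    """Sample likely next words. Fluency emerges; understanding doesn't."""
    output = [seed]
    for _ in range(length):
        followers = counts.get(output[-1])
        if not followers:
            break  # no known continuation for this word
        choices, weights = zip(*followers.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

# Tiny illustrative corpus (a real model trains on billions of words).
corpus = "the force is strong with this one the force will be with you"
model = train(corpus)
print(generate(model, "the"))  # e.g. "the force will be with you"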

Why We Trust ChatGPT Even When It's Wrong

Research shows that confidence influences credibility—the so-called confidence heuristic. We tend to trust information that is presented without hesitation.

LLMs take advantage of this bias by generating fluent and authoritative text. Unlike a human who might hedge with “I think” or “this might be wrong,” ChatGPT will deliver its responses with complete confidence—even when they’re incorrect.

As a result, naïve users can mistake hallucinated content for facts, leading to misinformation in journalism, education, and even the legal system.


What Is Pseudolaw?

Pseudolaw operates much like ChatGPT but in the courtroom. It consists of false legal arguments that sound sophisticated but have no actual legal basis. Sovereign citizens and other pseudolegal theorists claim that:

  • Governments don’t have legitimate authority over them.
  • They have a secret second identity (a “strawman”) that debts and taxes apply to.
  • They can avoid legal obligations simply by using the right legal-sounding jargon.

The Pseudolaw Playbook: Legalese Without the Law

Pseudolaw flourishes because legal language is inherently complex. Much like AI-generated text, pseudolaw mimics legal forms and terminology but lacks actual legal reasoning.

For example, sovereign citizens often file affidavits containing:
✅ Formal legal language ("I declare under penalty of perjury…")
✅ Fancy formatting that looks official
✅ Outdated legal citations that sound authoritative

This ritualistic use of legal jargon tricks both practitioners and victims into believing the arguments hold legal weight—when they don’t.


Parallel Psychological Traps: Why We Fall for It

1. Conceptual Pareidolia: The Brain Sees Meaning Where There Is None

Both ChatGPT and pseudolaw exploit our brain’s pattern-seeking system.

  • ChatGPT users trust outputs because they look like well-written sentences.
  • Pseudolaw adherents trust their arguments because they sound like legal reasoning.

In both cases, the output may be completely disconnected from reality, but our brains instinctively associate familiar patterns with truth.

2. The Confidence Heuristic: Mistaking Confidence for Competence

  • ChatGPT writes with absolute assurance—so readers assume it knows what it’s talking about.
  • Sovereign citizens perform legal rituals with great conviction—so followers assume they are correct.

When information is presented in an authoritative way, we are less likely to question it, even when we should.

3. Magical Thinking: The Promise of a Secret Shortcut

Both AI and sovereign citizens sell the dream of hidden knowledge:

🤖 ChatGPT: “You don’t need to research—just ask me anything, and I’ll tell you!”
🧙 Pseudolaw: “The government is hiding the real law, but I can teach you the secret to beating the system!”

This taps into psychological tendencies toward wishful thinking and conspiracy belief, where people long for a hidden truth that “experts” don’t want them to know.


The Real Danger: When Form-Over-Substance Has Consequences

1. AI Hallucinations Are Already Reaching Courtrooms

ChatGPT and similar tools have already been misused in legal cases:

  • Lawyers in New York and Australia submitted legal briefs with fabricated cases generated by ChatGPT.
  • Judges in the Netherlands and the U.S. cited AI outputs in court, not realizing they contained hallucinated legal principles.

These incidents show how easy it is to be fooled by AI that looks smart but cannot verify truth.

2. Courts Are Struggling to Handle Pseudolaw Cases

Judges are overwhelmed by sovereign citizens filing meritless lawsuits that clog the legal system with fictional claims. A single $50 parking ticket can escalate into $2,000 in court costs because of the time wasted rebutting pseudolegal arguments.


The Solution: Digital and Legal Literacy

Both pseudolaw and AI-generated content succeed when users don’t have the knowledge to distinguish appearance from truth. The solution? Better digital and legal education.

✅ For AI Users: Critical thinking skills must keep pace with AI advancements. Don’t assume ChatGPT is correct—ask for sources, verify facts, and trust human expertise in critical fields.

✅ For Legal Consumers: Plain-English legal education should be emphasized in schools. Understanding how law actually works is the best defense against pseudolaw scams and false claims.


Key Takeaways

🔎 ChatGPT and pseudolaw both create the illusion of knowledge. Their outputs look trustworthy, even when they’re nonsense.

🧠 Human psychology makes us vulnerable to these illusions. Our brains mistake familiar patterns and confidence for real expertise.

⚖️ AI and pseudolaw are causing real-world damage. Misinformation, legal trouble, and clogged courts are just the beginning.

📢 Legal and AI literacy are the best defenses. We must teach critical thinking and verification skills in an era of algorithmic and legal deception.


Final Thought

The next time you read an AI-generated response or hear someone argue they don’t have to pay taxes because they didn’t consent to the law, ask yourself:
Am I looking at knowledge—or just a really good illusion?

Stephen, Founder of The Prompt Index

About the Author

Stephen is the founder of The Prompt Index, the #1 AI resource platform. With a background in sales, data analysis, and artificial intelligence, Stephen has successfully leveraged AI to build a free platform that helps others integrate artificial intelligence into their lives.