The AI Illusion: Why ChatGPT and Pseudolaw Trick Us Into Believing Nonsense
Artificial intelligence is transforming the way we produce and consume information. Tools like ChatGPT generate text that sounds human-like, while pseudolegal arguments borrowed from sovereign citizen groups are creeping into courtrooms worldwide. But what if both of these trends have more in common than we realize?
Dr. Joe McIntyre's research explores a fascinating parallel: both ChatGPT and pseudolaw rely on form over substance, creating the illusion of meaning while lacking actual depth or validity. This blog post will break down how human psychology makes us vulnerable to these illusions, and why it's crucial to develop digital and legal literacy to see through them.
Why Do People Fall for AI and Pseudolaw?
If you've ever seen a face in a cloud or a smiley face in an electrical outlet, you've experienced pareidolia: the brain's tendency to find patterns in random input. This pattern-seeking ability is essential for survival but can also deceive us.
When we read ChatGPT's responses or listen to a confident pseudolaw guru citing legal nonsense, we think we're seeing expertise and meaningful information. In reality, we're witnessing pattern recognition gone wrong, what McIntyre calls conceptual pareidolia. We mistake form (legalese or well-written text) for actual substance (valid arguments or true facts).
ChatGPT: A Confidence Trick on a Global Scale
How Large Language Models Mimic Human Speech
At its core, ChatGPT is not designed to understand meaning. It predicts the next word in a sentence based on massive amounts of training data, creating statistically likely responses rather than fact-checked information.
Imagine an AI system trained to predict movie dialogue. It might generate a convincing Star Wars script by recognizing common phrases, but it does not understand the plot, emotions, or themes of the movies. It's just playing an advanced game of Mad Libs with probabilities.
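To make the "advanced Mad Libs" analogy concrete, here is a toy sketch of statistical next-word prediction using a simple bigram chain. This is not how real LLMs work internally (they use neural networks trained on vast corpora, not lookup tables), and the corpus and function names below are invented for illustration; but the core move is the same: pick a statistically likely next token, with no model of meaning or truth.

```python
import random
from collections import defaultdict

# A tiny made-up "training corpus" of movie-dialogue-flavored text.
corpus = (
    "the force is strong with you "
    "the force is with you always "
    "may the force be with you"
).split()

# Count which words follow which: this is the entire "model".
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=6, seed=42):
    """Emit a statistically plausible word chain; no understanding involved."""
    random.seed(seed)  # fixed seed so the sketch is reproducible
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Words that followed more often are proportionally more likely.
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every word pair in the output genuinely occurred in the corpus, so the result reads fluently, yet the program has no idea what a "force" is. Scale this idea up by many orders of magnitude and you get fluent, confident, and sometimes entirely wrong prose.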
Why We Trust ChatGPT Even When It's Wrong
Research shows that confidence influences credibility, the so-called confidence heuristic. We tend to trust information that is presented without hesitation.
LLMs take advantage of this bias by generating fluent and authoritative text. Unlike a human who might hedge with "I think" or "this might be wrong," ChatGPT will deliver its responses with complete confidence, even when they're incorrect.
As a result, naïve users can mistake hallucinated content for facts, leading to misinformation in journalism, education, and even the legal system.
Pseudolaw: When Legal Gobbledygook Sounds Convincing
What Is Pseudolaw?
Pseudolaw operates much like ChatGPT but in the courtroom. It consists of false legal arguments that sound sophisticated but have no actual legal basis. Sovereign citizens and other pseudolegal theorists claim that:
- Governments don't have legitimate authority over them.
- They have a secret second identity (a "strawman") that debts and taxes apply to.
- They can avoid legal obligations simply by using the right legal-sounding jargon.
The Pseudolaw Playbook: Legalese Without the Law
Pseudolaw flourishes because legal language is inherently complex. Much like AI-generated text, pseudolaw mimics legal forms and terminology but lacks actual legal reasoning.
For example, sovereign citizens often file affidavits containing:
- Formal legal language ("I declare under penalty of perjury…")
- Fancy formatting that looks official
- Outdated legal citations that sound authoritative
This ritualistic use of legal jargon tricks both practitioners and victims into believing the arguments hold legal weight, when they don't.
Parallel Psychological Traps: Why We Fall for It
1. Conceptual Pareidolia: The Brain Sees Meaning Where There Is None
Both ChatGPT and pseudolaw exploit our brain's pattern-seeking system.
- ChatGPT users trust outputs because they look like well-written sentences.
- Pseudolaw adherents trust their arguments because they sound like legal reasoning.
In both cases, the output may be completely disconnected from reality, but our brains instinctively associate familiar patterns with truth.
2. The Confidence Heuristic: Mistaking Confidence for Competence
- ChatGPT writes with absolute assurance, so readers assume it knows what it's talking about.
- Sovereign citizens perform legal rituals with great conviction, so followers assume they are correct.
When information is presented in an authoritative way, we are less likely to question it, even when we should.
3. Magical Thinking: The Promise of a Secret Shortcut
Both AI and sovereign citizens sell the dream of hidden knowledge:
- ChatGPT: "You don't need to research; just ask me anything, and I'll tell you!"
- Pseudolaw: "The government is hiding the real law, but I can teach you the secret to beating the system!"
This taps into psychological tendencies toward wishful thinking and conspiracy belief, where people long for a hidden truth that "experts" don't want them to know.
The Real Danger: When Form-Over-Substance Has Consequences
1. AI-Powered Legal Disasters
ChatGPT and similar tools have already been misused in legal cases:
- Lawyers in New York and Australia submitted legal briefs with fabricated cases generated by ChatGPT.
- Judges in the Netherlands and the U.S. cited AI outputs in court, not realizing they contained hallucinated legal principles.
These incidents show how easy it is to be fooled by AI that looks smart but cannot verify truth.
2. Courts Are Struggling to Handle Pseudolaw Cases
Judges are overwhelmed by sovereign citizens filing meaningless lawsuits, clogging the legal system with fictional claims. One $50 parking ticket can escalate into $2000 in court fees because of time wasted on pseudolegal arguments.
How to Fight Back: Legal & AI Literacy
Both pseudolaw and AI-generated content succeed when users don't have the knowledge to distinguish appearance from truth. The solution? Better digital and legal education.
- For AI users: Critical thinking skills must keep pace with AI advancements. Don't assume ChatGPT is correct: ask for sources, verify facts, and trust human expertise in critical fields.
- For legal consumers: Plain-English legal education should be emphasized in schools. Understanding how law actually works is the best defense against pseudolaw scams and false claims.
Key Takeaways
- ChatGPT and pseudolaw both create the illusion of knowledge. Their outputs look trustworthy, even when they're nonsense.
- Human psychology makes us vulnerable to these illusions. Our brains mistake familiar patterns and confidence for real expertise.
- AI and pseudolaw are causing real-world damage. Misinformation, legal trouble, and clogged courts are just the beginning.
- Legal and AI literacy are the best defenses. We must teach critical thinking and verification skills in an era of algorithmic and legal deception.
Final Thought
The next time you read an AI-generated response or hear someone argue they don't have to pay taxes because they didn't consent to the law, ask yourself:
Am I looking at knowledge, or just a really good illusion?