AI Essentials Course — Phase 2: Building Skills
Session 10: Fact-Checking and Verifying AI Output
Learn why AI can be confidently wrong — understand the phenomenon of 'hallucination' — and build the critical verification habits that protect your academic credibility.
Video Introduction
Watch: Session 10 with Dr. Walter
Meet your instructor and get an overview of today's lesson before diving in.
Learning Objectives
What You'll Learn
By the end of this session, you will be able to:
- Understand what AI 'hallucination' means and why it happens
- Use Perplexity to verify a factual claim generated by another AI
- Compare information across multiple platforms to identify discrepancies
- Develop a personal habit for verifying AI-generated facts before using them academically
Platform Access
Getting Started with Perplexity
Follow these steps to access Perplexity and get ready for today's lesson.
- Open Perplexity at https://perplexity.ai and sign in to your account.
- Also open ChatGPT in a separate tab at chat.openai.com — you'll use both today.
- Start a new Perplexity search and a new ChatGPT chat so both are ready.
- Have a notepad or document ready to record the responses from each platform side by side.
- You don't need to prepare any specific content — the exercise begins when you type your first factual question.
Free Account Required
All platforms used in this course offer free accounts with no credit card required. If you already have an account, simply sign in. The free tier gives you everything you need to complete this session.
Core Lesson
Today's Lesson
Read through this lesson carefully before starting the practice exercises below.
Here is one of the most important lessons in this entire course: AI can be wrong, confidently and convincingly. This is not a flaw that will be fixed someday — it's a fundamental characteristic of how large language models work. AI generates text by predicting what words should come next based on patterns in its training data. When it doesn't have reliable information, it can still generate fluent, authoritative-sounding text that is simply not true. This is called hallucination.
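The "predicting what words should come next" idea can be made concrete with a toy sketch. This is not how a real large language model works internally (real models use neural networks over enormous corpora), but a tiny bigram counter shows the key point: the predictor always produces a fluent-looking next word, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy illustration only -- NOT a real language model. We "train" on a
# tiny corpus by counting which word follows which.
corpus = (
    "the study found that the results were significant and "
    "the study found that the sample was small"
).split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the most frequent follower. Note there is no truth check:
    # the prediction is always confident, never "I don't know."
    return following[word].most_common(1)[0][0]

print(predict_next("study"))  # most common word after "study" in the corpus
```

Notice that `predict_next` can only report a pattern; it has no mechanism for checking whether the sentence it helps build corresponds to reality. Scaled up enormously, that is the root of hallucination.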
Hallucination can take many forms. An AI might cite a scholarly article that doesn't exist, with a convincing-sounding author name, journal title, and volume number — all fabricated. It might state a historical date that's off by a decade. It might attribute a quote to the wrong person. It might describe a study's findings in a way that subtly reverses the actual conclusions. In every case, the text reads confidently, grammatically, and plausibly. That's what makes it dangerous for academic use.
This isn't a reason to avoid AI — it's a reason to verify. Think of AI responses the way a good journalist thinks about tips: useful leads that require confirmation before you use them. A journalist doesn't publish a story based on a single anonymous source; a scholar doesn't cite a source they haven't read. The same standard should apply to AI output. Before you use any AI-generated fact, statistic, quote, citation, or claim in academic work, verify it independently.
Perplexity is your most powerful verification tool because it cites its sources. When Perplexity tells you something, you can click through to the source and check it yourself. This is enormously valuable for fact-checking. But even Perplexity's sources require a second look — the platform can occasionally misread or misrepresent the pages it cites. Final verification always goes back to the primary source: the original article, the government database, the official document.
Today you'll build a verification practice by deliberately generating a factual response from ChatGPT and then fact-checking it using Perplexity. You'll be looking for any discrepancies — places where the two platforms disagree, where Perplexity's sources contradict ChatGPT's claims, or where a claim turns out to be difficult to verify at all. Finding discrepancies isn't a failure; it's the exercise working exactly as intended.
Developing a healthy skepticism about AI output is one of the marks of an AI-literate scholar. Trust, but verify. Use AI freely for brainstorming, organizing, and drafting — but treat any specific factual claim as a hypothesis to be tested, not a fact to be accepted. This habit will protect your academic credibility and make you a more rigorous thinker.
Hands-On Practice
Practice Exercise
Follow these steps in Perplexity. Take your time — there's no rush. Learning happens through doing.
- In ChatGPT, ask a factual question about your academic field — for example: "What are the most widely cited statistics about graduate student dropout rates in the United States?" or "Who are three of the most influential scholars in the field of adult education, and what are their best-known works?"
- Read ChatGPT's response carefully. Note any specific claims: statistics, names, dates, book titles, journal names, or direct quotes.
- Go to Perplexity and ask the same question. Read Perplexity's response and its citations. Do the claims match? Are there any differences?
- Click through to at least two of Perplexity's sources. Can you confirm the specific claims that ChatGPT made? Do the original sources support what both AIs said?
- Document your findings: Write down (1) any claim both AIs agreed on that you could verify in a primary source, (2) any claim where the two AIs disagreed, (3) any claim that was hard or impossible to verify.
- Bonus: Ask ChatGPT to generate a citation for a specific academic source. Then search for that exact citation online or in Google Scholar. Does it exist? Does it say what ChatGPT claims?
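If you prefer a structured record over freehand notes, the three categories from the documentation step can be kept in a simple log like this. Everything here is a placeholder sketch: the entries, field names, and status labels are illustrative suggestions, not real findings.

```python
# A minimal sketch of a verification log. All entries and field names
# are illustrative placeholders -- replace them with your own findings.
findings = [
    {
        "claim": "A statistic ChatGPT stated without a source",
        "chatgpt": "stated confidently, no citation",
        "perplexity": "cited a primary source that confirms it",
        "status": "verified",
    },
    {
        "claim": "A book title attributed to a scholar",
        "chatgpt": "gave one publication year",
        "perplexity": "sources give a different year",
        "status": "platforms disagree",
    },
    {
        "claim": "A direct quote attributed to a named author",
        "chatgpt": "quoted it verbatim",
        "perplexity": "no cited source contains the quote",
        "status": "could not verify",
    },
]

# Print a quick summary, one line per claim.
for entry in findings:
    print(f"[{entry['status'].upper()}] {entry['claim']}")
```

Any claim that lands in the "platforms disagree" or "could not verify" rows is exactly the kind of output you should never carry into academic work without going back to a primary source.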
Try These
Example Prompts to Try
Copy any of these prompts directly into Perplexity and see what happens. Feel free to modify them to match your own academic interests.
- "What are the completion rates for doctoral programs in the United States? Cite your sources."
- "Who coined the term 'andragogy', and in what publication did it first appear?"
- "Verify this claim for me: [paste a specific claim from a ChatGPT response]. What do the primary sources say?"
Summary
Key Takeaways
- AI hallucination, the generation of confident-sounding but false information, is a fundamental characteristic of current large language models, not a bug that will someday be fixed.
- Always verify specific facts, statistics, citations, and quotes from AI before using them in academic work.
- Perplexity's citations make it a powerful fact-checking tool, but even those citations require verification against primary sources.
- A healthy skepticism about AI output is a mark of AI literacy — use AI freely for brainstorming and drafting, but verify any factual claims before relying on them.
Verification and Cross-Referencing Prompts
You've developed the critical habit of using one AI to generate information and another AI (with citations) to verify it. This cross-platform verification approach — combined with clicking through to primary sources — is the gold standard for responsible AI use in academic research.