Lesson 1.4 - Hallucinations, Bias, and False Confidence
Learning objective
Understand AI risks that directly affect testing quality.
One of the most dangerous AI behaviors is hallucination.
Hallucination means the AI confidently presents incorrect or fabricated information.
For testers, this can look like:
- Invented test steps
- Incorrect API behavior
- Made-up error messages
- Non-existent configuration options
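A concrete example of the "incorrect API behavior" case: an assistant may confidently generate a test step that calls a method that does not exist. The Python sketch below uses a real pattern of this kind: response objects in the `requests` library expose `.json()`, not `.get_json()` (that name belongs to Flask). The endpoint URL is hypothetical.

```python
import requests

# Hypothetical endpoint, for illustration only.
URL = "https://api.example.com/users/42"

response = requests.get(URL)

# A hallucinated test step an AI might suggest: requests.Response has no
# get_json() method (Flask's Request does). This raises AttributeError:
# data = response.get_json()

# The method that actually exists in requests: .json() parses the body.
data = response.json()
assert "id" in data  # verify the claimed schema against real output
```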
AI is also biased toward:
- Common use cases
- Happy paths
- Popular frameworks
That's a problem, because a tester's job is to focus on what goes wrong, not just what usually works.
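Here is a minimal sketch of what that bias looks like in practice, using pytest. The `apply_discount` function is invented for illustration; the point is the contrast between the single happy-path test an assistant typically proposes and the failure-path tests a tester should add.

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Illustrative function under test (not from any real codebase)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# What an AI assistant typically suggests: the happy path only.
def test_discount_happy_path():
    assert apply_discount(100.0, 10.0) == 90.0

# What a tester adds: the inputs where things go wrong.
@pytest.mark.parametrize("percent", [-1, 101])
def test_discount_rejects_out_of_range(percent):
    with pytest.raises(ValueError):
        apply_discount(100.0, percent)

def test_discount_boundaries():
    assert apply_discount(100.0, 0.0) == 100.0    # no discount
    assert apply_discount(100.0, 100.0) == 0.0    # full discount
```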
How testers should respond
- Ask the AI why it suggests something
- Ask it to state its assumptions explicitly
- Cross-check its output against the actual requirements
Example follow-up prompt:
"What assumptions are you making about this system?"
Key takeaway
Confidence is not correctness. Always challenge AI output.