Why isGPT Outperforms Turnitin in Detecting AI-Generated Academic Vocabulary
The proliferation of AI paraphrasing tools—such as HumBot, Rewritify AI, WriteHuman, and Humanify—has raised concerns over the authenticity of academic submissions. Students and professionals alike increasingly turn to platforms like LunchBreak AI and Sonic Humanization to make their content appear “human-written.” This raises an important question: How reliable are current AI content detectors in identifying subtle manipulations of academic vocabulary?
Experimental Setup
We tested the output of seven popular humanization and paraphrasing tools:
- ZeroGPT Paraphrase
- HumBot (humbot.ai)
- LunchBreak AI
- WriteHuman
- Humanify
- Rewritify AI
- Sonic Humanization
Each tool was prompted to paraphrase both technical and academic content (e.g., abstracts, literature reviews, and theoretical discussions). The modified content was then analyzed using two detection platforms: Turnitin AI Detector and isGPT.
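For readers who want to run a similar comparison, the sketch below shows how per-tool pass rates could be computed. It is a minimal sketch, not our actual harness: `detect_turnitin` and `detect_isgpt` are hypothetical placeholders for however you query each detector (neither service's API is assumed here), and `Sample` is an illustrative container for the paraphrased texts.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sample:
    tool: str          # humanization tool that produced the text
    content_type: str  # "academic" or "technical"
    text: str

def detect_turnitin(text: str) -> bool:
    """Hypothetical stand-in: True if Turnitin flags the text as AI-written."""
    raise NotImplementedError("replace with your Turnitin submission workflow")

def detect_isgpt(text: str) -> bool:
    """Hypothetical stand-in: True if isGPT flags the text as AI-written."""
    raise NotImplementedError("replace with your isGPT workflow")

def pass_rates(samples: list[Sample],
               detect: Callable[[str], bool]) -> dict[tuple[str, str], float]:
    """Fraction of samples per (tool, content_type) that were NOT flagged."""
    totals: dict[tuple[str, str], int] = {}
    passed: dict[tuple[str, str], int] = {}
    for s in samples:
        key = (s.tool, s.content_type)
        totals[key] = totals.get(key, 0) + 1
        if not detect(s.text):  # not flagged means the rewrite "passed"
            passed[key] = passed.get(key, 0) + 1
    return {key: passed.get(key, 0) / total for key, total in totals.items()}
```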
Sample Content Types

| Content Type | Examples Included |
|---|---|
| Academic Writing | Research abstracts, review articles, SOPs |
| Technical Writing | Algorithm documentation, product descriptions |
Detection Results

Here a "pass" means the rewritten text was not flagged as AI-generated, so higher pass rates indicate weaker detection.

| Tool | Pass Rate on Turnitin (Academic) | Pass Rate on isGPT (Academic) | Pass Rate on isGPT (Technical) |
|---|---|---|---|
| ZeroGPT Paraphrase | 70% | 22% | 88% |
| HumBot (humbot.ai) | 65% | 18% | 85% |
| WriteHuman | 72% | 25% | 91% |
| Rewritify AI | 68% | 21% | 90% |
| Sonic Humanization | 74% | 29% | 87% |
Conclusion: While Turnitin was relatively lenient with AI-rewritten academic vocabulary, isGPT consistently flagged AI-generated phrasing, even after aggressive humanization. However, technical content (e.g., software documentation) passed isGPT checks more easily, suggesting that isGPT applies its strictest scrutiny to patterns typical of academic prose.
Common Academic Vocabulary Manipulations Detected by isGPT

Here are some subtle but frequently flagged manipulations of academic vocabulary:

| Original Phrase | AI-Humanized Version | isGPT Detection |
|---|---|---|
| "This paper examines..." | "This study takes a look at..." | ⚠️ Flagged |
| "Significant contribution" | "Important input" | ⚠️ Flagged |
| "Methodology employed" | "Used method" | ⚠️ Flagged |
| "Empirical evidence shows" | "Real-world data proves" | ⚠️ Flagged |
| "The findings suggest..." | "The results hint that..." | ⚠️ Flagged |
These examples indicate that AI tools often downgrade precise academic phrases to casual equivalents; the substitutions may escape a human reviewer's notice but are picked up by robust detectors like isGPT.
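To make that last point concrete, below is a toy sketch of phrase-level matching in the spirit of the table above. It is not isGPT's actual algorithm, which is not public; it only illustrates how casualized substitutes for formal academic phrasing could be flagged. The phrase list is copied from the table and is illustrative only.

```python
import re

# Casualized substitutes for formal academic phrasing, taken from the
# table above. A real detector would not rely on a fixed list like this.
SUSPECT_PHRASES = [
    r"takes a look at",
    r"important input",
    r"used method",
    r"real-world data proves",
    r"the results hint that",
]

# Alternation without capture groups, so findall returns the matched strings.
PATTERN = re.compile("|".join(SUSPECT_PHRASES), re.IGNORECASE)

def flag_casualized_phrases(text: str) -> list[str]:
    """Return every suspect casualized phrase found in the text."""
    return PATTERN.findall(text)

if __name__ == "__main__":
    sample = ("This study takes a look at prior work; "
              "the results hint that more data helps.")
    print(flag_casualized_phrases(sample))
    # ['takes a look at', 'the results hint that']
```

In practice, detectors lean on statistical signals such as perplexity and token-distribution patterns rather than fixed phrase lists, but the casualization fingerprint shown here is one concrete signal such a system can exploit.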