
Is Llama 3.1 Content Detectable?

Meta released Llama 3.1, achieving state-of-the-art performance and becoming the first-ever open-sourced frontier model. In this brief study, we reviewed Llama 3.1 to find out if the Originality.ai AI Detector could detect its content.

Meta released Llama 3.1, including its flagship 405B model, achieving state-of-the-art performance across key benchmarks and becoming the first-ever open-sourced frontier model, a major milestone in open-source AI development.

It’s the first time an open-source AI model has matched or outperformed top closed AI models like OpenAI’s GPT-4o. By offering a private, customizable alternative to closed AI systems, Meta is enabling anyone to create their own tailored AI. With that shift, it’s more important than ever to understand the accuracy of AI detectors.

This brief study looks at 1,000 Llama 3.1-generated text samples to find out whether the Originality.ai AI Detector can detect Llama 3.1 content.

TL;DR — Is Llama 3.1 AI-Generated Content Detectable?

  • Yes — Llama 3.1 generated text is detectable with exceptional accuracy across the Originality.ai AI detection models.
  • Results Overview — 3.0.0 Turbo demonstrated 99.6% accuracy, 1.0.0 Lite had 99.1% accuracy, and 2.0.1 Standard had 98.8% accuracy.

Try the Originality.ai AI Detector. Then, learn about AI content detection accuracy and Originality’s exceptional performance in a meta-analysis of third-party studies.

Note: Standard is now retired. Get the latest details about model updates in our guide on which AI detector model is best for you!

Dataset

To evaluate the detectability of Llama 3.1, we prepared a dataset of 1,000 Llama 3.1-generated text samples.

The Method: gathering AI-generated text data

For AI text generation, we used Llama 3.1 with three approaches:

  1. Rewrite prompts: The first approach generated content by giving the model a custom prompt plus reference articles (likely themselves LLM-generated) to rewrite from (450 samples).
  2. Rewrite human-written text: The second approach tested whether AI rewrites of human-written text could bypass AI detection. The samples for this approach were drawn from an open-source dataset (325 samples).
    1. One-Class Learning for AI-Generated Essay Detection
      1. Paper: https://www.mdpi.com/2076-3417/13/13/7901
      2. Dataset: https://github.com/rcorizzo/one-class-essay-detection
  3. Write articles from scratch: The third approach generated articles from scratch on topics ranging from fiction to nonfiction, such as history, medicine, mental health, content marketing, social media, literature, robots, and the future (225 samples).

Evaluation

To evaluate detection efficacy, we used the Open Source AI Detection Efficacy tool that we released.

A brief overview of the Originality.ai AI detection models

Originality.ai has three AI text detection models: 3.0.0 Turbo, 2.0.1 Standard, and 1.0.0 Lite.

  • 3.0.0 Turbo — Best if your risk tolerance for AI is zero. It is designed to identify any use of AI, even light AI editing.
  • 2.0.1 Standard — A balanced model and a great option if you are okay with slight use of AI (i.e., AI editing).
  • 1.0.0 Lite — Best if you permit light AI editing (such as Grammarly’s spelling or grammar suggestions).

For additional information on each of these models, check out our AI detector and read our AI detection accuracy guide.

Evaluating the AI detection models

The open-source testing tool returns a variety of metrics for each detector you test, each of which reports on a different aspect of that detector’s performance, including:

  • Sensitivity (True Positive Rate): The percentage of AI-generated samples that the detector correctly identifies as AI.
  • Specificity (True Negative Rate): The percentage of human-written samples that the detector correctly identifies as human.
  • Accuracy: The percentage of all the detector’s predictions that are correct.
  • F1: The harmonic mean of Precision and Sensitivity (Recall), often used as a single summary metric when ranking the performance of multiple detectors.

For a detailed discussion of these metrics, what they mean, how they're calculated, and why we chose them, check out our blog post on AI detector evaluation. For a succinct snapshot, the confusion matrix is an excellent representation of a model's performance.
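As an illustration of how these metrics relate to a confusion matrix, here is a minimal sketch that computes them from raw counts. This is not the open-source testing tool itself, and the counts in the example are illustrative, not figures from this study:

```python
def detector_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard detection metrics from confusion-matrix counts.

    tp: AI samples flagged as AI      fn: AI samples missed
    tn: human samples passed as human fp: human samples flagged as AI
    """
    sensitivity = tp / (tp + fn)                 # True Positive Rate (recall)
    specificity = tn / (tn + fp)                 # True Negative Rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall correctness
    precision = tp / (tp + fp)                   # flagged-as-AI that really are AI
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "f1": f1}

# Illustrative mixed dataset: 1,000 AI and 1,000 human samples
metrics = detector_metrics(tp=996, fp=10, tn=990, fn=4)
```

Note that on an AI-only dataset like the one in this study there are no human samples, so specificity is undefined and the True Positive Rate (recall) is the metric that matters.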

Below is an evaluation of all these models on the above dataset. 

Confusion Matrix

Figure 1. Confusion Matrix on AI only dataset with Model 1.0.0 Lite
Figure 2. Confusion Matrix on AI only dataset with Model 2.0.1 Standard
Figure 3. Confusion Matrix on AI only dataset with Model 3.0.0 Turbo

Results of the Evaluation

For this smaller test of Originality.ai’s ability to detect Llama 3.1 content, we reviewed the True Positive Rate (Recall): the percentage of time the model correctly identified AI text as AI across the 1,000 samples of Llama 3.1 content.

1.0.0 Lite:

  • Recall (True Positive Rate) = 99.1%

2.0.1 Standard:

  • Recall (True Positive Rate) = 98.8%

3.0.0 Turbo:

  • Recall (True Positive Rate) = 99.6%
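To put these recall figures in absolute terms, a quick arithmetic sketch shows roughly how many of the 1,000 Llama 3.1 samples each model would have missed (derived from the reported percentages, not separately measured counts):

```python
samples = 1000
recalls = {"1.0.0 Lite": 0.991, "2.0.1 Standard": 0.988, "3.0.0 Turbo": 0.996}

# Missed samples = total * (1 - recall), rounded to the nearest whole sample
missed = {model: round(samples * (1 - recall)) for model, recall in recalls.items()}
# e.g. 3.0.0 Turbo misses about 4 of 1,000 Llama 3.1 samples
```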

Final Thoughts

Overall, Originality.ai continues to demonstrate an outstanding capability to identify AI-generated content, including Meta’s Llama 3.1 and other recent model releases such as OpenAI’s GPT-4o, Claude 3.5, Gemini 1.5 Pro, and GPT-4o-mini.

Each of Originality.ai’s AI detection models detected Llama 3.1 with exceptional accuracy, ranging from 98.8% (2.0.1 Standard) to 99.1% (1.0.0 Lite) and 99.6% (3.0.0 Turbo).

Jonathan Gillham

Founder / CEO of Originality.ai. I have been involved in the SEO and content marketing world for over a decade. My career started with a portfolio of content sites; more recently, I sold two content marketing agencies, and I am the Co-Founder of MotionInvest.com, the leading place to buy and sell content websites. Through these experiences, I understand what web publishers need when it comes to verifying that content is original. I am not for or against AI content; I think it has a place in everyone’s content strategy. However, I believe you, as the publisher, should be the one making the decision on when to use AI content. Our originality checking tool has been built with serious web publishers in mind!

More From The Blog

AI Content Detector & Plagiarism Checker for Marketers and Writers

Use our leading tools to ensure you can hit publish with integrity!