Integrating AI into Counselling Assessments with Integrity

In the therapeutic world, we value congruence, transparency, and evolving with the client. As educators, our response to Generative AI should mirror these values. The “AI-free” classroom is increasingly unrealistic; instead, we have a unique opportunity to mentor students in using Large Language Models (LLMs) as professional tools.

If our students are going to use AI—and the data suggests they are—our role is to ensure they do so with the clinical rigour and ethical backbone the profession demands. Here is how we can shift from “policing” to “professionalising” AI in counselling assessments.

1. AI as the ‘Scaffolding,’ Not the Structure

We can explicitly allow AI in the preparatory phases of an assignment. This mirrors how a practitioner might use digital tools for brainstorming or organising initial clinical thoughts.

  • Brainstorming & Outlining: Encourage students to use AI to generate case study themes or to structure a literature review.
  • The “Prompt Appendix”: Make it a requirement that students submit a “Prompt Log” as an appendix. This documents their conversation with the AI, showing how they refined their queries and navigated the tool’s limitations.

2. The Mandate of the “Human-in-the-Loop”

In counselling, a factual error isn’t just a typo; it’s a clinical risk. Assessments should require students to treat AI output as unverified testimony.

  • Mandatory Fact-Checking: If a student uses an AI-generated summary of a theory (such as Yalom’s therapeutic factors), they must provide the original source citation alongside it and confirm that the summary accurately represents the source. As part of this fact-checking, students should read the relevant sections of the source themselves.
  • The Comparison Task: Ask students to generate a 500-word summary of a modality using AI, and then write a 500-word critique identifying what the AI missed, oversimplified, or hallucinated.

3. Grading the “Ghost in the Machine”

This is where the rubber meets the road: the marking criteria. We need to move beyond “content accuracy” and start grading author credibility and source integrity.

Sample Marking Rubric: AI Integrity & Credibility

| Criteria | Developing (Pass/Fail Boundary) | Proficient (High Marks) |
|---|---|---|
| Source Integrity | Relies on AI-generated citations without verification; includes partial or full “ghost” (hallucinated) references. | All citations are verified against primary peer-reviewed sources; zero hallucinations present. |
| Critical Synthesis | Content feels “generic” or repetitive; lacks a distinct student voice or clinical nuance. | AI is used for structure, but the final analysis shows deep, human-led synthesis and application. |
| Ethical Disclosure | AI use is hidden, poorly documented, or fails to include a Prompt Log where required. | Clear transparency regarding AI-assisted sections and a detailed reflection on how output was verified. |
| Professional Credibility | Major deduction: presence of hallucinations or uncorrected AI errors indicates a failure of due diligence. | Demonstrates “professional skepticism” by identifying and correcting AI biases or errors; very high accuracy of content. |

4. Ethical Use as a Clinical Skill

Ethical AI use is a direct extension of counselling ethics. We should teach students that using AI without disclosure is a breach of integrity—a core value in professional codes such as those of the ACA and PACFA.

  • Confidentiality First: Educators must emphasise that inputting real client data or sensitive placement details into a public AI is a major ethical violation and a breach of privacy laws.
  • Bias Awareness: Assessments should require students to reflect on the inherent biases of AI (e.g., Western-centric perspectives) and how that impacts diverse client populations in a therapeutic context.

Final Thoughts: The Credibility Penalty

When we grade a student down for a hallucinated reference, we aren’t just marking a paper; we are teaching them that in the therapy room, truth and accountability matter. By building AI into our assessments, we aren’t “giving in.” We are training the next generation of practitioners to be discerning consumers of technology who prioritise the safety and dignity of their future clients over the convenience of a keyboard.

Discussion Question: How are you currently adapting your rubrics for the AI era? Are you seeing more “ghost” citations, or are your students stepping up to the challenge of transparency?

Authorship Statement

This article was co-created by Nathan Beel and Google Gemini Pro 2026. The core concepts, structure, and editorial direction were provided by Nathan Beel, with Google Gemini Pro assisting in the drafting and formatting of the content at his prompting.
