Co-authored by Nathan Beel and Gemini Pro 2026.
[Process transparency: I provided the ideas and edited the draft, and AI generated most of the text].
In the rapidly evolving landscape of Australian higher education, Generative AI (GenAI) has presented a unique challenge to academic integrity. However, for those of us in the counselling profession—where “who we are” is as critical as “what we know”—the stakes are significantly higher.
When a counselling student uses AI to generate an assignment and subsequently lies about it, we are no longer just dealing with a case of academic misconduct. We are witnessing a breakdown in professional dispositions: the ethical judgement, honesty, and transparency required to practise safely. This is where academic integrity meets professional gatekeeping.
The Double Breach: Deception as a Dispositional Issue
Gatekeeping in counselling education is the process of evaluating a student’s suitability for the profession. Under the PACFA Accreditation Standards, this process begins at admission and continues through to graduation. It encompasses more than just marks; it includes the assessment of “fitness to practise.”
When a student submits AI-generated work as their own, they commit a first act of deception. If, when confronted, they choose to lie, they commit a second. While current standards are often viewed through the lens of clinical competency, these academic deceptions are "red flags" for future professional impairment that may not be immediately visible in a therapy room.
- Integrity vs. Efficiency: A counsellor who prioritises “getting it done” over “doing the work” may mirror this behaviour in clinical notes or client reports later in their career.
- The Therapeutic Alliance: Therapy is built on a foundation of radical honesty. If a trainee cannot be honest with their educators, how can we trust their transparency with clients or supervisors?
Guidance for Academics: Navigating Interpersonal Dishonesty
For academics, the shift from a student’s academic dishonesty (the AI use) to their interpersonal dishonesty (the lie) is often the most distressing part of the process. It requires a move from “marker” to “gatekeeper.”
- Naming the Behaviour: Do not let the interpersonal lie slide in favour of focusing on the software. Explicitly name the behaviour in your documentation: “The student denied the use of AI despite contradictory evidence, reflecting a lack of transparency and an inability to take professional accountability.”
- The “Clinical Parallel” Technique: In the meeting, draw a parallel to the profession. Ask: “In our code of ethics, transparency is foundational. If you cannot be transparent about your process here, how do you envision maintaining ethical integrity when faced with a difficult clinical error in the future?”
- Documenting the ‘Affect’ and Response: It is vital to document the student’s reaction to being caught. Are they defensive? Do they double down? A student who can admit a mistake demonstrates a capacity for growth; a student who maintains a lie under pressure is demonstrating a significant dispositional risk.
- Separating Skill from Suitability: Remind yourself that a student may be “competent” in a roleplay while simultaneously being “unfit” for the profession due to deceptive practices. Your duty is to the latter.
Guidance for Counselling Educators: Handling the “Discovery”
Educators are the first line of defence. However, because AI detection is notoriously unreliable, the “confrontation” must be handled as a clinical and developmental opportunity rather than just a disciplinary one.
- Lead with Curiosity, Not Accusation: Use an “open dialogue” approach. Instead of “I know you used AI,” try “The tone and depth of this paper differ significantly from your previous work and our tutorials. Can you walk me through your writing process and how you sourced your ideas?”
- The “Lying” Pivot: If a student continues to lie despite evidence (e.g., hallucinated citations or stylistic shifts), the conversation must shift from the assignment to the professional disposition.
- Remediation as a Pathway: PACFA standards mandate that institutions identify academic needs and provide remediation. However, for deceptive practices, remediation must go beyond “learning how to cite” and move into ethical decision-making and professional identity formation.
Guidance for Institutions: Policies with “Teeth”
Standard university “Academic Honesty Policies” often lack the nuance required for professional accredited programmes. Counselling departments must leverage progression and hurdle assessments to ensure integrity.
- Hurdle Assessments for Ethics: Deceptive use of AI should be factored into hurdle assessments. If a student cannot demonstrate honesty in their academic work, they have not met the “inherent requirements” to progress to Work Integrated Learning (WIL).
- Monitoring Fitness to Practise: Institutions should establish that “fitness” includes academic integrity. A student who lies about AI use is demonstrating a conduct issue that potentially compromises the safety of the public.
- Define “Authorised Use”: Clearly state in every unit outline what constitutes “support” versus “replacement”.
A Call to Accrediting and Peak Bodies
Accrediting bodies like PACFA and the ACA have a responsibility to lead the conversation on how digital deception maps onto professional suitability.
- Expanding the Definition of Competency: We must move beyond seeing competency only as a clinical skill. The ability to be honest when under pressure is a fundamental professional competency.
- Standardised Reporting: We need clear standards on how AI-related misconduct and subsequent deception should be reported during membership applications or clinical internship placements.
- Protecting the Public: PACFA’s Code of Ethics mandates protecting the public from inappropriate practice. Deceptive behaviour in training is a precursor to inappropriate practice in the field.
Final Thoughts: Protecting the Profession
Gatekeeping is a heavy burden for educators. It is emotionally taxing and legally complex. But “gateslipping”—allowing a student to graduate who has demonstrated a pattern of deception—is a far greater risk to the public and the reputation of the profession.
By addressing AI use through the lens of professional gatekeeping, we move the conversation from “how to catch a cheat” to “how to cultivate an ethical practitioner.” Our clients deserve professionals who are as authentic as the healing we aim to provide.
