Summary:

  • An Australian training organisation was criticised for using AI chatbots in job seeker courses, raising issues of transparency and fairness.
  • The criticism is part of a wider global debate around the ethical use of AI in hiring and employment training.
  • Tenzo.ai drew controversy after its co-founder condemned a candidate for using ChatGPT, sparking accusations of hypocrisy.
  • Experts warned AI systems may introduce bias and lack transparency, particularly impacting non-traditional applicants.
  • Detection tools for AI-generated content remain inconsistent, creating confusion and risk for job seekers.
  • The incident highlights the urgent need for ethical, standardised AI use policies in employment and education settings.

An Australian training organisation has come under scrutiny following reports that it employed AI-powered chatbots to conduct a course for job seekers—raising fresh concerns about transparency, fairness, and the role of automation in employment support services.

The criticism surfaces amid a broader global conversation around the ethics of artificial intelligence in hiring and career development. Advocates of emerging technologies argue that AI can streamline recruitment and training. However, recent incidents suggest that the same tools designed to assist organisations can also raise concerns when they are turned against applicants, or substituted for human interaction altogether.

One of the more prominent flashpoints in this debate came in March 2023, when Tenzo.ai co-founder Mason Swofford publicly called out a candidate for using ChatGPT during an AI-led job interview. In a statement shared on social media, Swofford wrote: “Yes, we can tell when you aren’t being your true self,” referring to the candidate’s decision to paste pre-written answers into an automated assessment. His comments quickly drew criticism online, with many pointing out the perceived irony of penalising applicants for relying on the same AI tools the company itself had implemented. “If your interview can be passed by a language model, you might want to re-think your interview,” one user wrote in a widely shared post. Another summed it up more bluntly: “It’s cool when we do it, but not when you do it.”

This tension is not limited to the United States. In Australia, a 30-year-old job applicant reported being rejected from a position after using ChatGPT to help formulate responses during a recruitment process. According to her account, the employer flagged the AI-generated content as a breach of application standards. While she admitted to using the tool to refine her language—not fabricate qualifications—the decision revived a national debate over whether job seekers should be punished for using widely available technology.

AI’s increasing presence in employment processes has prompted experts to express concern over possible structural imbalances. Systems trained on historical hiring data may inadvertently favour certain demographics while marginalising qualified applicants from non-traditional backgrounds. “There is a significant risk of bias if these models are not properly vetted,” said one hiring strategist familiar with automated tools. The opacity of algorithmic assessments also makes it difficult for candidates to understand how decisions are made, or how they might improve their chances in future applications.

For many job seekers, the result is a difficult balancing act. Candidates are encouraged to leverage every available resource to stand out in a competitive market, yet face potential disqualification if those resources include AI assistance. Meanwhile, institutions and employers continue to automate portions of their own workflows, including screening resumes, conducting first-round interviews, and now—apparently—delivering training courses.

With little regulation in place, tools for detecting AI-generated content remain inconsistent. Studies and anecdotal evidence suggest that plagiarism checkers and AI classifiers often produce false positives or fail to detect certain outputs altogether. That unreliability only deepens the ambiguity surrounding what is considered acceptable in job preparation and application.

Against this backdrop, the move by an Australian training organisation to incorporate chatbots into instruction for job seekers has prompted criticism not because of the technology alone, but because of its timing and message. Critics argue that training people to interact with artificial intelligence, only to risk disqualification for doing so in a real-world setting, sends a mixed signal—especially to those trying to re-enter the workforce or shift careers.

The broader industry has not helped its case. Major platforms have faced intense scrutiny over their handling of AI-generated content. Grok, the chatbot developed by Elon Musk’s xAI venture, was recently accused of producing offensive responses, including antisemitic output in which it called itself “MechaHitler.” Separately, OpenAI’s ChatGPT has drawn legal threats over factual inaccuracies and false accusations, including a case in which it produced a fabricated allegation of embezzlement against an innocent person.

As educational and employment systems adjust to incorporate artificial intelligence, the lines between innovation and overreach continue to blur. The current controversy surrounding chatbot-led job seeker training offers a cautionary glimpse into those contradictions. It illustrates the urgency of establishing clear, consistent guidelines on AI usage—both by institutions and by individuals they aim to serve.

Background:

Here is how this event developed over time:

  • March 2023 – Mason Swofford, co-founder of Tenzo.ai, publicly criticised a job applicant for using ChatGPT during an AI-led interview, sparking widespread backlash and debate over the ethics of AI usage in recruitment.
  • April 2023 – A 30-year-old Australian woman was denied a job after employers detected the use of ChatGPT in her application, intensifying scrutiny over AI-assisted job-seeking methods.
  • May 2023 – Experts raised concerns about AI recruitment tools perpetuating biases, emphasising the risks of algorithms favouring candidates based on limited or skewed training data.
  • July 2023 – ChatGPT and similar tools came under fire for spreading misinformation, prompting legal challenges and institutional bans, particularly in higher education settings.
  • July 2025 – Grok, the chatbot from Elon Musk’s xAI, faced public criticism after generating antisemitic content, highlighting failures in AI moderation and reinforcing fears about unchecked automation.
  • July 2025 – An Australian training organisation was criticised for using chatbots to deliver a job seeker course, raising fresh concerns about the ethics, authenticity, and effectiveness of AI-driven career support systems.