Dr Patrick Harte, Head of Taught Postgraduate Programmes, Edinburgh Napier Business School, and Dr Fawad Khaleel, Head of Global Online, Edinburgh Napier Business School

Introduction

Artificial intelligence (AI) continues to reshape the academic landscape, but we should regard this as an evolutionary stage of development rather than a continuation of the revolutionary disruption experienced since the launch of ChatGPT in November 2022. This evolution, and the intersection between AI and academic integrity, now needs to be examined for Higher Education (HE) through the three key dimensions of education, prevention and detection. As a sector we need to know how AI can influence, and is influencing, challenging and supporting institutional efforts to promote the foundation of inclusive academic integrity amongst all cohorts.

Academic integrity, as typically defined in the sector, is a function of five components: honesty, trust, fairness, responsibility and respect, which combine to embed positive staff and student behaviour. However, the evolutionary embedding of academic integrity will be founded on overtly integrating ethical good practice when interacting with AI and on reducing the instances of academic misconduct which are currently prevalent within the sector. In recognition of this current stage of development, and the continued need to drive both positive and ‘anti-negative’ messages, we will concentrate on the benefits and challenges of integrating AI and academic integrity across the dimensions of education, prevention and detection to identify ‘what’s next?’.

Education

AI has the potential to play significant roles in the education of staff, students and institutions; however, there are particular challenges when dealing with staff which need to be addressed, or at least aired. If AI is to be integrated within curricula and utilised as a tool (aka ‘the new Google’), then there is a need for confidence that the underlying algorithms are designed to promote social and societal good and that they communicate positive and respectful values to students. If this essential caveat cannot be addressed by staff, their messages to student cohorts on the use of AI will be negative and will perpetuate its exclusion as a legitimate academic tool appropriate for unbiased scholarship.

Where AI may be particularly valuable as a solution to preventing academic misconduct and promoting integrity is within the sphere of personalised learning. However, the use of integrated tuition systems and similar packages is often impractical from both resource and scalability perspectives on, for instance, large undergraduate courses. Therefore, the most immediate and practical solutions lie in staff communicating policies which positively promote AI usage with appropriate acknowledgement. This is particularly important for international cohorts who, when confronted with the digital shock of unfettered AI availability, ask, very reasonably, what is acceptable? The answer is educating students on AI’s appropriate ethical use with a very clear and coherent approach towards acknowledgement of that use. This localised education on what is institutionally acceptable, and on the appropriate protocols for AI acknowledgement, should be integrated at the point of assessment as the immediate ‘next’. It can be incorporated in alignment with standardised, packaged materials which may be useful in establishing the basic context of academic integrity through the avoidance of inadvertent academic misconduct. Effective communication here needs to be at all levels and through different voices: consistent messages coming from the institution, the programme and the individual module are important, as is the reinforcing voice coming from student unions.

Prevention

Whilst AI use may be prevalent in all areas of HE, it is in the domain of assessment where its use may be seen as cheating, plagiarism or other forms of gaming the system. There are continued suggestions that academics need to address the use/misuse of AI through improved assessment design, but often this does not come locally or independently; academics look towards their institutions for guidelines, protocols and templates for the introduction of these AI-resistant assessment practices. AI itself can be a useful analytic tool, identifying patterns in previous assessments to determine which are more or less likely to attract academic misconduct. Thus, it appears incumbent on institutions to provide direction on authentic, contemporary best practice in assessment design from dual perspectives. Firstly, traditional assessment characteristics should be reviewed, with word counts and key verbs revised and tasks made overtly authentic or applied; secondly, when designing an assessment, the brief should be tested by uploading the task to one or more generative AI packages and sharing the results with the cohort. This demonstration of the content and structure of an AI-generated assessment signals staff awareness of the outputs which could otherwise lead to investigation. If one in-class task is the critique of the generated text, raising students’ awareness of its shortcomings could be a valuable exercise in arresting an over-reliance on AI.

Where projects and portfolios are being assessed there is the potential to go beyond the simple written submission to ensure the authenticity and integrity of students’ work. AI is quite capable of providing coherent, human-imitating narration or enhanced voiceovers for presentations, so simply including such a component of assessment as a safeguard against its use is inadequate. Whilst there will be opportunities to integrate appropriately acknowledged AI use at many developmental stages of the project, submission with a ‘live’ oral presentation is recommended as a critical next step for high-stakes, high-credit assessment.

Detection

Detection of academic misconduct, or the explicit communication that institutions have the capacity and capability to detect academic misconduct, bridges the dimensions of education and detection. Whilst there are increasingly sophisticated tools which claim to detect AI use, there remain several limitations to their use in the current ‘arms race’ between generators and detectors. The AI detector still requires ‘training’ on the latest generator outputs, so it necessarily trails advances in both the content and style of those outputs. Further, and again an issue for international cohorts, there can be a bias against the linguistic style of a translated sentence. As long as these issues prevail, and ethical constraints arise in uploading students’ work to third-party repositories, these detectors are neither ‘next’ nor practical in the near future. Where linguistic issues or similar qualitative queries/discrepancies arise, the oral defence or viva remains the most secure method of detecting misconduct. Where more success in technological detection is demonstrably evident is in the online proctoring of exams. If summative assessment is to take the exam format, and similarity checkers such as Turnitin are not adequate in detecting AI use, then rapidly evolving technology and more accessible pricing packages will make online proctoring tools more available. Contemporary packages using, for instance, facial recognition, eye-tracking, audio-monitoring and key-tracking are increasingly seen not as spies but as protectors of integrity, with only suspicious behaviour being flagged for investigation. If a summative assessment is necessary, a realistic ‘next’ is the integration of online proctoring tools to reduce or eliminate online misconduct, as further roll-out of these packages is attractive to academic as well as commercial environments.

So, what’s next?

Whilst it cannot be overstated that embedding academic integrity is easier in a longitudinal undergraduate programme than in a single-year taught postgraduate programme, there are simple ‘what’s next’ steps to take in relation to AI for all cohorts. First, education must entail a communication programme which promotes the ethical use of AI, with clear, coherent and unambiguous guidance on the use of AI within assessments, and this must be repeated exhaustively at the point of assessment. This bridges into the area of prevention, where the potential for AI use must be identified by staff for, again, enhanced communication with students. In tandem with this, assessment review is fundamental to the prevention of inappropriate AI usage, embedding institutional guidelines on best practice in applied and authentic assessment. Detection remains a thorny issue because of the time lag in technological policing catching up with developments in generative AI; where mature online solutions are unavailable, the oral defence has the potential to be incorporated as a standard in new assessment design.

In order to tackle the issue of AI-driven breaches of academic integrity, HE institutions need a coordinated approach, where curriculum development, assessment redesign, co-creation of knowledge on integrity and detection policy are aligned with each other. There is a strong relationship between educating students on AI and academic integrity and the prevention measures an institution may adopt, as illustrated above. Whilst detection is an important element, universities cannot rely upon it alone to halt breaches of academic integrity.

The HE sector also cannot solve the issue of academic dishonesty by solely throwing money at various detection software and tools. The establishment of micro-linkages, as presented, between education and prevention can provide the necessary scaffolding in establishing a culture of integrity.

