Practicing What We Preach: The Challenges of Using Generative AI for Staff and Institutions


Fair Use for the Assessment and Analysis of Student Work

Whilst the recent focus in HE on the rise of generative AI has tended to be on stopping students from cheating with it, and on working out how to guide them in using it, an equally important aspect is how staff themselves use it.

Staff are already talking about how they use these platforms to help write grant applications, respond to admissions queries, or even mark students' work. Whilst it can be tempting to simply upload a marking scheme into a system and start submitting student work, this raises all sorts of issues, from the legal (data protection and fair use) through to ethical questions about whether students expect this. Similarly, if we put our research ideas for a grant into such a system, how do we know that the intellectual property (IP) is protected, or that we aren't contravening our own institution's regulations on sharing that IP?

There are also fairness and bias issues – marking could reflect the training of a particular AI platform in ways that aren't clear to the people using it. We could also be creating content in which large sections are verbatim copies of material from elsewhere.

Professional Practice

The British Computer Society (BCS) ethics group looked at this from a development and use perspective in our recent report, which offers guidance for organisations and individuals. This is relevant in educational contexts, especially to university leadership groups, IT teams, and the professional and academic staff who are using these systems.

Several recommendations from that report are pertinent to Higher Education. One concerns the need for organisations – such as universities – to have published policies on the ethical use of AI. There was also a strong view that the UK should be taking a lead here – for AI in general, and equally the sector could take a lead on the effective and appropriate use of AI in education.

So, the question for us is how do we ethically use AI as practitioners and institutions, and how do we develop an ethical approach in our students and staff?

Institutional Guidance

HE institutions need to develop clear guidance for staff on what is or isn't allowed. This needs to be done in an agile way – not lots more forms and processes, but clear, succinct policies and guidance on what the issues are and on preferred or approved platforms.

As the recent publicity around the Post Office's Horizon software has highlighted, IT systems being misused or misunderstood can lead to real-world impact and harm. This is a very real threat with emerging AI systems, which are less well understood than traditional IT platforms and which can lure users into a false sense of security with their apparently human-like responses.

Guidance should clarify the need for transparency to others, whether students, fellow staff, or our clients (be they research journals, funding bodies, or the wider public). It should also clarify where to get assistance when choosing and using AI platforms, and how these fit with approaches to data management, assessment practices and ethical research.

Ethical Practice

We need to be clear on our ethical approach and ensure appropriate use of these platforms and tools. As staff, we need to ensure that our use of them is clear and transparent, especially if we are using them to develop content or to analyse student work. One challenge here is when and how to use AI-enabled plagiarism detectors, and appreciating their limitations.

Of growing interest – from both a practical and an enhanced-learning perspective – is how to use AI appropriately for marking and feedback. Some people are already using these systems to review their own feedback, rather than pushing student work into the system itself. This sort of developmental analysis can be instructive to us as practitioners, without raising the challenges of submitting student-generated content. Understanding the data protection (GDPR) and ethical aspects of submitting others' material – especially students' work – into AI systems opens up various challenges. There are now many guides on how to use generative AI in education; a few starting points are summarised below.

Many institutions are producing guidance, though the level of detail and the approaches taken vary, creating a myriad of issues and differences. The platforms themselves offer different levels of guidance on use too, with a variety of approaches to storing data, reviewing conversations, and acknowledging the sources of the data used to train the models. This complexity is likely to increase as these systems are integrated into other tools such as word processors and mobile devices, so even being aware that AI is in use may become ambiguous.

Problems and Limitations

For now, institutions and users need to be clear about the possible problems with these platforms. Good practice in using them includes understanding where things can go wrong, including issues of fairness, bias and accuracy.

Fairness includes access, and the potential for a new digital divide, where wealthier individuals (staff and students) can access paywall-protected and more capable AI systems.

Bias is something that models may inherently have. There are well-known issues with some platforms whose training data was found to be biased towards particular characteristics, reflecting the bias inherent in the internet itself. For further information, take a look at these articles from AYA Data and The Guardian.

There are various challenges around the accuracy of AI systems. These are improving, but users need to understand that such systems are different from audited and edited content: they are susceptible to incorrect training data, to flaws in their own code and implementation, and even, potentially, to adversarial attacks intended to manipulate them.

Conclusions

Generative AI offers us an opportunity to redefine education, with new efficiencies and effective practice making for better experiences and outcomes for students and staff. However, in the same way that we ask students to demonstrate professional practice and to avoid cheating, we need to demonstrate that in our own practice. This means being transparent about our own use of AI in generating content and in assessment – whether marking, supporting assessment, or checking students' work for potential academic misconduct. We need to demonstrate that we are "walking the walk" ourselves, as individuals and as institutions.

Finally, having written this article, I thought I'd see what ChatGPT thinks – or rather, generates – based on a suitable prompt. Asking it to respond "in the style of a Higher Education professional, and considering the challenges of generative AI, summarise in 50 words how a Higher Education Institution, such as a university, should be ethically adopting Generative AI" gave the answer:

Higher Education Institutions should ethically adopt Generative AI by prioritizing transparency, accountability, and inclusivity. This involves clear communication about AI's limitations, robust oversight mechanisms to ensure fair use, and active engagement with diverse stakeholders to mitigate potential biases and promote responsible innovation in teaching, research, and administration

This seems to have picked up the main themes I'd identified myself, so perhaps I could have saved myself the time and asked it to generate the article, though that would have been unethical!

Neil Gordon (Twitter/X: @n_a_gordon) is Reader in Computer Science at the University of Hull.
