Reflections on a whole university approach to AI

The proliferation of AI, particularly generative AI like ChatGPT, has undeniably disrupted many aspects of our lives, and the higher education (HE) sector is no exception. Among HE leaders, academics, and professional service colleagues, there exists a palpable sense of uncertainty, fuelled by fears and anxieties surrounding AI's integration into our institutions.

How should universities navigate this terrain? Where does one even begin to embrace AI? How can universities adopt a holistic, university-wide approach to crafting their AI strategies? These questions were at the forefront of discussion during the "A Whole University Approach to Generative AI 2024" event organized by HE Professional in March 2024, attended by delegates from universities and colleges in the UK and Australia, as well as representatives from QAA, JISC, and the British Computer Society. The event facilitated a robust conversation on the latest methodologies for adopting a university-wide approach to AI, while also showcasing several key projects from institutions' journeys towards AI maturity. Figure 1 below captures the main elements of the discussion:

Figure 1: The whole university approach to AI

AI Literacy and Demystification

While the concept of AI has been in existence for over six decades, recent advancements in computational power, algorithm development, and significant commercial investment have led to a paradigm shift in its applications, making it more accessible to everyday users. The introduction of ChatGPT by OpenAI has particularly captured the imagination of many, with some likening generative AI to the invention of the Internet or the calculator. Importantly, proficiency in AI need not be confined to computer programmers alone.

Universities should prioritize educating their executives and leaders on AI, demystifying the technology in the process. Leaders should then ensure that colleagues across the university can develop their AI literacy and understand its applications. Training programs should be tailored to specific roles and individual needs, offering concrete examples and applications to make the learning process relevant and engaging. Moreover, it is imperative that colleagues are aware of the ethical considerations surrounding AI, including issues of privacy, equality, and safety.

Focus on Education Outcomes with Authentic Assessments

Amidst debates over the regulation of generative AI tools within educational settings, it is crucial for universities to clarify the overarching objectives of their education programs. Institutions must focus their commitment on delivering high quality education and equipping students with the requisite knowledge and skills for their future careers. This necessitates a clear understanding of the program's learning outcomes, informed by subject benchmarks, professional body requirements, and considerations of student readiness for the workforce.

It is important for universities to provide clear policy guidance and support to academic staff on the use of generative AI in their teaching and student assignments. Assessments should be fit for purpose: they are there to evaluate how well students have achieved the education outcomes. Universities should establish clear policies that prevent the use of essay mills and other forms of contract cheating, and students need to be fully briefed on these policies as part of their induction. It is the student's responsibility to explain to tutors how they produced their assignments and which AI tools they used. Honesty and integrity are essential, and students need to understand all parts of their submitted work.

Authentic assessment practices should be encouraged. Apart from subject-specific skills, universities should also assess transferable skills such as research and learning skills, critical thinking, creative thinking, and problem-solving. University colleagues should embrace the use of AI and discuss its appropriate use openly and frankly with students. There are some excellent assessment guidelines and recommendations provided by QAA. It is also worth reading the policy white paper edited by Matthew Shardlow and Annabel Latham.

Supporting Students Using AI

For academic colleagues, AI can certainly help in many ways. To name a few, AI can help generate curriculum ideas, answer questions from students around the clock, and improve accessibility and support for international students. There have been many pilot projects that deploy AI to grade student work and provide students with in-depth feedback. The Cogniti project from the University of Sydney is a great example: it creates agents that act as an 'AI double' of academics themselves, providing additional support 24/7.

The potential benefits of AI for university professional support departments cannot be overstated, particularly in enhancing student support. By automating administrative tasks, AI can replace manual operations and save university resources. However, concerns regarding accountability and accuracy must be addressed, with human oversight remaining integral to the deployment of AI-driven solutions. There are already leaders in this area. For instance, the University of Glasgow has an automation project that uses an AI bot to process mitigating circumstances and assignment extension requests from students, saving many hours of human time.

Equality, Fairness, and Ethical Considerations

As universities embrace AI technologies, they must remain vigilant in safeguarding principles of equality, fairness, and ethics. Data privacy concerns necessitate clear agreements on data usage, storage, and the purpose of use, with a focus on data anonymization where possible. Furthermore, AI policies should prioritize transparency and accountability, ensuring that human intervention is always available to rectify errors and safeguard against biases.

The UK has the ambition to lead global efforts in managing the safe advancement of AI. Universities already have ethics approval processes in place to manage the ethics and safety risks of AI research projects. Universities also need to be mindful that AI bots are only as fair as the data samples we feed them, so data sample selection is very important to ensure that all demographics are represented. Humans should take full responsibility for the AI bots they have deployed to support students; in practice, a small percentage of cases will always require human intervention.

Professional Bodies

As AI continues to raise new ethical challenges, professional bodies play a pivotal role in establishing standards and promoting responsible AI practices in professions. Initiatives such as the British Computer Society's Foundation Certificate in the Ethical Build of AI exemplify the industry's commitment to ensuring ethical AI development and management.

Collaboration between academia and professional bodies is essential in fostering a culture of ethical AI usage and upholding standards of professional competence. There are good examples of professional bodies' work in this area. The Chartered Institute for Securities & Investment is leading on the Certificate in Ethical Artificial Intelligence for financial professionals, and the Chartered Association of Business Schools has established a working group on an AI ethics certificate for business and management educators.

Simplicity and Clarity

Universities must strive to make their AI policies and guidelines clear, concise, and user-friendly. The success of AI tools lies not only in their functionality but also in their accessibility to users of varying technical backgrounds. By prioritizing clarity and simplicity, universities can empower their staff and students to harness the full potential of AI while mitigating potential risks.

Final reflections

Universities are facing significant challenges, with regulators expecting high-quality student outcomes despite limited resources. Rapidly evolving AI technology is changing the way we work and learn. It is imperative that universities approach this AI transformation with careful consideration and foresight, underpinned by clear and cohesive policy and guidance on the use of AI.

References:

1. Navigating the complexities of the artificial intelligence era in higher education, QAA, February 2024, https://www.qaa.ac.uk/docs/qaa/news/quality-compass-navigating-the-complexities-of-the-artificial-intelligence-era-in-higher-education.pdf?sfvrsn=8179b281_11, last accessed on 3 May 2024

2. ChatGPT in Computing Education – A Policy White Paper, edited by Matthew Shardlow and Annabel Latham, December 2023, https://cphcuk.files.wordpress.com/2023/11/chatgptcomputingeductationwhitepaper.pdf, last accessed on 3 May 2024

3. Letting educators take control of generative AI to improve learning, teaching, and assessment, Danny Liu, Teaching@Sydney, 2023, last accessed on 3 May 2024
