The integration of Artificial Intelligence (AI) into higher education necessitates a proactive approach to developing staff competencies. For university educators, the imperative extends beyond the ad hoc utilisation of AI tools; it also encompasses a critical, ethical, and creative application in classrooms and other learning environments. The objective of literacy programmes is to unlock the innovative potential within education and educators, moving beyond the simplistic narratives of technological determinism or the imposition of rigid, inflexible prescriptions.
This blog post describes the emerging landscape of AI literacy programmes within universities. It advocates for a shift away from top-down, expert-driven directives towards a more organic, principle-guided, and creativity-activating approach. The goal is to empower academic staff to navigate the complexities of AI, harness its potential responsibly, and ultimately, enhance the educational experience for all. AI is a broad-based technology that affects every field and industry, so a purely expert-driven approach is not feasible across such a wide landscape.
AI literacy in university education
Before we can build effective programmes, we need a robust understanding of what AI literacy entails for university educators. It’s far more than technical proficiency in coding or mastery of a specific AI application. It’s a multifaceted competency that enables educators to engage thoughtfully and meaningfully with AI technologies, preserving and enhancing the value of a university education.
Frameworks, such as those proposed by Ng et al. (2021) and UNESCO (2024), provide valuable starting points. Ng et al. identify key dimensions including the ability to know and understand AI, use it effectively, critically evaluate its outputs and implications, and navigate the ethical issues it raises. UNESCO’s AI Competency Framework for Teachers emphasises a human-centred mindset, the ethics of AI, understanding AI techniques and applications, and even aspects of AI system design relevant to educational contexts. For educators, these dimensions translate into the capacity to:
- Comprehend core AI concepts: Grasping the fundamentals of how AI, particularly generative AI and large language models, functions, without necessarily needing to be a data scientist.
- Critically evaluate AI tools: Assessing the suitability, limitations, biases, and ethical implications of various AI tools for specific pedagogical purposes. Resources like the Secure AI Framework (SCoRE) can be invaluable in this context, offering guidance for lecturers to make informed choices about the tools they integrate.
- Integrate AI into pedagogy: Reimagining teaching strategies, learning activities, and assessment methods to leverage AI’s strengths while mitigating its risks.
- Foster ethical AI use: Guiding students in the responsible use of AI tools and promoting academic integrity in an AI-suffused environment.
Moreover, as Dal Ponte et al. (2025) suggest in their work on a GenAI Literacy and Fluency Framework, there’s a progression from basic AI literacy (understanding capabilities and ethics) to AI fluency (harnessing full capabilities and integrating new approaches with GenAI).
Key pillars for an AI literacy programme
To build a truly effective AI literacy programme that resonates with academic staff and fosters lasting change, we need to ground it in solid principles. These principles or ‘pillars’, synthesised from existing frameworks, local policies, and the evolving needs of university educators, should focus on activating creativity and critical thinking within their educational context, rather than simply disseminating information.
Pillar 1: Ethical and responsible AI use as the North Star. Ethics cannot be an afterthought; it must be the bedrock of any AI literacy initiative, a sentiment echoed across recent frameworks (Ng et al., 2021; UNESCO, 2024). This involves fostering a deep understanding of AI ethics specific to education, aligning AI use with institutional values, and empowering staff to make informed, responsible choices regarding AI tools (Dal Ponte et al., 2025). This includes:
- Deep understanding of AI ethics in education: Exploring issues of bias in algorithms and data, student privacy concerns, data security, transparency in AI decision-making, and intellectual property.
- Alignment with institutional values and policies: Ensuring that the adoption and use of AI in teaching and learning are congruent with the university’s mission and ethical guidelines.
- Empowering informed choices: Equipping staff to use frameworks (like the aforementioned SCoRE framework) to select and deploy AI tools responsibly.
Pillar 2: Critical evaluation and purposeful pedagogical integration. The adoption of AI must be driven by pedagogical purpose, requiring educators to critically evaluate tools (Ng et al., 2021) and understand their appropriate application in educational contexts (UNESCO, 2024; Dal Ponte et al., 2025). This pillar focuses on:
- Developing critical assessment skills: Enabling staff to analyse AI tools for their genuine educational value, considering learning outcomes, student engagement, and accessibility.
- Understanding affordances and limitations: Recognising where AI can genuinely enhance teaching (e.g., personalised feedback, automating mundane tasks, generating diverse learning materials) and where human interaction, critical thinking, and nuanced judgement remain irreplaceable.
- Student-centred application: Prioritising how AI can support diverse learners and create more inclusive learning environments.
Pillar 3: Igniting creative exploration and pedagogical innovation. AI literacy should empower educators to become innovators, moving beyond consumption to co-creation and fostering an experimental mindset (Dal Ponte et al., 2025; UNESCO, 2024). This pillar is about:
- Cultivating an experimental mindset: Encouraging staff to ask “What if?” and to view AI as a set of tools for creative problem-solving in their teaching practice.
- Moving from consumption to co-creation: Supporting staff in adapting AI tools, designing novel learning experiences leveraging AI, or even contributing to the development of education-specific AI applications.
- Fostering interdisciplinary collaboration: Creating spaces where staff from different disciplines can share insights and co-develop innovative AI-enhanced teaching approaches.
Pillar 4: Demystifying AI: building foundational understanding. While deep technical expertise may not be the primary goal for all educators, a foundational understanding of AI concepts is essential for confident and critical engagement. This directly relates to the “know and understand” AI dimension from Ng et al. (2021) and the need to “comprehend core AI concepts” as outlined by UNESCO (2024). The work by Dal Ponte et al. (2025) also begins with foundational knowledge in its progression towards fluency. This involves:
- Accessible explanations: Clearly explaining core concepts like machine learning, neural networks, large language models, and generative AI in a way that is understandable to a non-technical audience.
- The role of data: Highlighting the critical importance of data in training AI models, the implications of data quality and bias, and how data is used in educational AI tools.
- Understanding “under the hood”: Providing enough insight into how AI tools generate responses or make predictions to enable critical evaluation of their outputs.
Visualising AI’s role across cognitive skills: the Bloom-AI framework
To further aid educators in conceptualising how AI can be integrated across different levels of cognitive engagement, frameworks that map AI tools and approaches to established pedagogical models like Bloom’s Revised Taxonomy can be exceptionally useful. These visualisations help to delineate where AI can support foundational knowledge acquisition and where human-led activities are crucial for developing higher-order thinking skills.
Consider, for example, a framework that aligns Bloom’s levels (Remember, Understand, Apply, Analyse, Evaluate, Create) with specific AI-supported activities, human-led pedagogical strategies, and examples of relevant AI tools. Such a model provides a practical lens through which educators can design learning experiences that strategically leverage AI.

By referencing such frameworks, educators can design activities more intentionally. For instance, AI tools might assist with tasks at the ‘Remember’ and ‘Understand’ levels, freeing up educators to focus on facilitating ‘Analyse’, ‘Evaluate’, and ‘Create’ activities that require deeper human critical thinking, debate, and nuanced project design. This strategic allocation of tasks ensures that AI augments, rather than replaces, the vital role of the human educator in fostering complex cognitive skills.
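As a purely illustrative sketch, such a mapping could even be captured in a simple data structure that course teams adapt for their own context. The activities and divisions of labour below are hypothetical examples for discussion, not the definitive Bloom-AI framework:

```python
# Illustrative mapping of Bloom's Revised Taxonomy levels to AI-supported
# and human-led activities. The entries are hypothetical examples for
# discussion, not drawn from the Bloom-AI framework itself.
BLOOM_AI_MAP = {
    "Remember":   {"ai_supported": "generate flashcards and recall quizzes",
                   "human_led": "curate which facts matter in the discipline"},
    "Understand": {"ai_supported": "produce plain-language summaries of readings",
                   "human_led": "probe misconceptions in discussion"},
    "Apply":      {"ai_supported": "draft practice problems with worked solutions",
                   "human_led": "coach students through authentic tasks"},
    "Analyse":    {"ai_supported": "suggest alternative framings of a dataset",
                   "human_led": "facilitate structured debate and critique"},
    "Evaluate":   {"ai_supported": "surface counter-arguments for review",
                   "human_led": "moderate peer assessment and judgement"},
    "Create":     {"ai_supported": "act as a brainstorming partner",
                   "human_led": "mentor original project design"},
}

def describe(level: str) -> str:
    """Return a one-line design prompt for a given Bloom level."""
    entry = BLOOM_AI_MAP[level]
    return f"{level}: let AI {entry['ai_supported']}; keep humans to {entry['human_led']}"

if __name__ == "__main__":
    for level in BLOOM_AI_MAP:
        print(describe(level))
```

The point of such a sketch is not the particular entries but the design habit: for each cognitive level, name explicitly what AI may assist with and what remains human-led.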
A principle-driven programme: activating staff creativity
The most impactful AI literacy programmes will be those that are built with and for academic staff, fostering a sense of ownership and collective exploration. The guiding principle should be to activate creativity and critical inquiry, rather than prescribing solutions or relying solely on expert opinion. This approach respects the diverse expertise and pedagogical creativity already present within the university.
Here’s a suggested programme of activities, designed to be iterative, collaborative, and adaptable:
1. Foundational workshops & interactive learning (laying the groundwork):
- “AI demystified for educators”: A series of short, engaging workshops covering essential AI concepts, ethical considerations (drawing on university policies and principles), practical prompting skills, and an overview of the AI landscape in education.
- “Navigating the AI toolkit – responsibly”: Hands-on sessions where staff explore a curated set of AI tools relevant to teaching and learning. Tool selection should be guided by ethical evaluations (e.g., using secureframework.ai) and pedagogical relevance. These are not just “how-to” sessions, but opportunities for critical discussion about when and why to use specific tools.
- “Ethical AI in our practice”: Facilitated discussions and case study analyses focused on applying ethical principles to real-world teaching scenarios involving AI.
2. Creative, collaborative, and R&D-focused activities (fostering innovation):
- “Assessment reimagined: AI hack-a-thons”: Staff from various disciplines collaborate to brainstorm, design, and prototype new assessment methods that meaningfully incorporate AI, either as a tool for students or as a means for educators to assess learning. This could include developing strategies for assessing AI-generated work or creating AI-assisted feedback mechanisms, all while upholding academic integrity.
- “Pedagogical innovation labs with AI”: Themed workshops or project groups where staff tackle specific teaching and learning challenges (e.g., enhancing student engagement in large lectures, providing timely and personalised feedback, fostering critical thinking in an AI world) by exploring and piloting AI-driven solutions.
- “AI playground & unconference sessions”: Regular, informal “sandbox” opportunities for staff to experiment with emerging AI tools in a supportive environment, share discoveries, troubleshoot challenges, and spark new ideas. An “unconference” format allows staff to propose and lead sessions on topics of immediate interest.
- “AI in the discipline” seed funding & showcases: Offer small internal grants or dedicated time for staff to conduct action research or develop pilot projects exploring discipline-specific applications of AI in their teaching. Regular showcases allow these innovators to share their findings, failures, and successes with the broader community.
- Cross-disciplinary communities of practice (CoPs): Establish and support CoPs focused on AI in education. These groups can provide ongoing peer learning, mentorship, and a platform for sharing best practices and co-creating resources.
3. Embedding AI literacy within university policy and principles:
- Principle-driven roadmaps: Develop a clear set of institutional principles for AI in education (e.g., human-centredness, equity, transparency, academic integrity, student agency). These principles act as a compass, guiding individual and collective decision-making.
- Policy co-development and review: Involve staff in the ongoing development and review of university policies related to AI, ensuring they are practical, ethical, and supportive of innovation.
- Living documents: Treat AI policies and principles not as static rules, but as living documents that evolve in response to technological advancements, staff experiences, and emerging ethical considerations.
Mitigating unintended consequences through principled action
The rapid evolution of AI inevitably brings unintended consequences, both positive and negative. A principled approach to AI literacy provides a robust framework for navigating this uncertainty. Ethical principles, such as those championing student agency, fairness, and transparency, serve as a vital roadmap. They help staff and the institution:
- Proactively address challenges: For instance, principles related to academic integrity can inform the development of strategies to address the misuse of AI in assignments before it becomes widespread.
- Recognise and amplify unexpected benefits: Staff experimenting with AI might discover novel pedagogical uses not initially envisioned. Principles encouraging innovation and sharing can help these positive outcomes spread.
- Adapt with agility: When new AI capabilities emerge or unforeseen ethical dilemmas arise, a shared set of principles allows for more agile and coherent responses than rigid, prescriptive rules.
- Foster a culture of reflection: Regularly revisiting how AI use aligns with core principles encourages ongoing critical reflection and adaptation among staff.
The key is to create a continuous feedback loop where the experiences of staff – their successes, challenges, and ethical dilemmas encountered “on the ground” – directly inform the refinement and application of these guiding principles.
Charting the course for an AI-literate future
The journey towards comprehensive AI literacy for university staff is not a short-term project but an ongoing commitment. It requires a cultural shift that values experimentation, collaboration, and critical engagement over passive adoption or fearful resistance. By moving away from a purely prescriptive model and embracing programmes that activate the inherent creativity and pedagogical expertise of our educators, universities can do more than cope with AI – they can lead the way in shaping its responsible and transformative use in higher education.
Empowerment is the cornerstone. Investing in well-designed, principle-driven AI literacy programmes is an investment in our academic staff, our students, and the very future of learning. The landscape of AI is constantly shifting, but with a compass of strong ethical principles and a crew of creatively literate educators, we can navigate the terrain with confidence and purpose, ensuring that AI serves to enhance human intellect, foster deeper learning, and create a more equitable and engaging educational future for all.
References
Dal Ponte, C., English, N., Lyons, K., & Oliveira, E. A. (2025). Scaffolding GenAI literacy and fluency at scale: A practical self-assessment framework for personalised learning [Preprint]. ResearchGate. https://www.researchgate.net/publication/391896514
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology, 58(1), 504–509. https://doi.org/10.1002/pra2.487
Schober, R. (2025). Introducing the BLOOM-AI framework: A pedagogical model designed to guide the integration of artificial intelligence into higher education. Teaching and Learning with AI Conference Presentations, 33. https://stars.library.ucf.edu/teachwithai/2025/thursday/33
United Nations Educational, Scientific and Cultural Organization. (2024). AI competency framework for students. UNESCO. https://doi.org/10.54675/JKJB9835
This blog post was created with the assistance of Gemini 2.5 (preview) and the Canvas function (you should try the Canvas function, it’s ace!)