Reconsidering assessment in higher education: the lane approach

The deployment of generative artificial intelligence (AI) tools has fundamentally challenged traditional assessment practices in higher education. As these technologies become increasingly sophisticated and accessible, educators must reconsider how we design, implement, and evaluate assessments to maintain academic integrity while preparing students for a world where AI is ubiquitous. The “lane approach” to assessment, often referred to as the “two-lane approach” because it distinguishes supervised from unsupervised assessment, offers a framework for balancing these competing demands.

This blog post examines three key aspects of implementing the lane approach: the rationale behind separating supervised and unsupervised assessments, the characteristics and implementation of supervised assessments, and the role of unsupervised assessments in fostering students’ ethical AI literacy. By strategically incorporating both types of assessment at the program level, institutions can create more coherent educational experiences that develop both disciplinary knowledge and AI capabilities.

Why do we need supervised and unsupervised assessments?

The integration of AI tools into education requires a nuanced approach to assessment. As noted by Professor Jason Lodge of the University of Queensland and his colleagues in their report “Assessment Reform for the Age of Artificial Intelligence,” simply attempting to create “AI-proof” assessments is neither feasible nor desirable [1]. Instead, we need a more sophisticated framework that acknowledges the reality of AI while maintaining the integrity of educational qualifications.

The lane approach to assessment, pioneered by Danny Liu and Adam Bridgeman from the University of Sydney, offers such a framework. The two-lane approach proposes that in one lane students cannot use AI at all (or can use it only with restrictions), while in the other lane they can use AI without restrictions.

The primary purpose of distinguishing between supervised and unsupervised assessments is twofold. First, it creates opportunities for students to learn how to use AI in an ethical and effective manner. As AI becomes increasingly embedded in professional practice across disciplines, graduates need to develop the ability to critically engage with, evaluate, and leverage AI tools appropriately. Unsupervised assessments provide a space for students to develop these skills.

Second, supervised assessments ensure that students can demonstrate the core disciplinary knowledge and skills required by their qualification. The Tertiary Education Quality and Standards Agency (TEQSA) emphasises the importance of forming trustworthy judgements about student learning in a time of AI through multiple, inclusive and contextualised approaches to assessment [2]. Supervised assessments provide crucial validation points where educators can make these judgements with confidence.

The integration of both supervised and unsupervised assessments at the program level creates a holistic approach to program assessment aligned with Course Learning Outcomes. Rather than treating AI as a threat to be mitigated through ever more restrictive assessment practices, the lane approach embraces AI as both a subject of learning and a tool for learning.

As Professor Lodge noted when discussing the TEQSA expert group’s approach to assessment reform, they weren’t trying to develop a map but a compass: providing direction rather than prescriptive solutions [3]. This philosophy underpins the lane approach, offering a flexible framework that can be adapted to diverse disciplinary contexts while maintaining core principles of academic integrity and authentic learning.

S Midford (La Trobe University, Melbourne)

What are supervised assessments, and how do they fit the lane model in terms of AI use?

Supervised assessments serve as critical assurance checkpoints within the lane approach. These assessments are designed to ensure students demonstrate core disciplinary knowledge and skills independently, supporting trustworthy judgements about their learning and upholding academic integrity. Supervised assessments provide crucial validation points in an educational program increasingly influenced by AI.

In the context of the lane approach, supervised assessments may either restrict or permit the use of AI, depending on the specific learning outcomes being assessed. The key characteristic is not necessarily the absence of AI but the presence of supervision that ensures the authenticity of the student’s work. What matters is creating assessment conditions that allow for rich ‘triangulation’ of student learning.

Several forms of supervised assessment can effectively serve as assurance checkpoints:

  1. Interactive oral/viva examinations: Students respond to questions about texts, theories, or case studies in a live, supervised setting. For example, a student might analyse an unseen passage or source in conversation with an examiner, demonstrating authentic knowledge and reasoning.
  2. In-class timed critical analysis: Students write essays or perform close readings in a supervised environment, demonstrating independent analytical skills. These assessments may restrict the use of AI entirely or permit the use of specific tools under supervision.
  3. Live Q&A after presentations: After delivering prepared content, students defend their research through targeted questions, confirming understanding that extends beyond what was prepared in advance, potentially with the aid of AI.
  4. Supervised skills demonstrations: Students may use specific AI tools (e.g., text analysis or visualisation) in a lab setting, with the process and results observed and discussed in real-time.

These supervised assessments should be strategically incorporated across a program, with at least one assurance checkpoint per Course Learning Outcome. This program-level approach ensures continuous evaluation and development of core disciplinary competencies.

The concept of “hurdle subjects” or “hurdle tasks” is central to this model. These are supervised assessments that students must pass to progress in their program, regardless of their performance in unsupervised assessments. This ensures that while students may leverage AI extensively in some contexts, they must also demonstrate independent mastery of essential disciplinary knowledge and skills.

The goal is not to create artificial barriers, but to ensure that assessments meaningfully validate the capabilities a qualification is intended to certify. Supervised assessments provide this validation, maintaining the integrity and value of educational qualifications in an age where AI can simulate many forms of academic output.

What are unsupervised assessments, and how can they help students with AI use?

Unsupervised assessments serve as learning vehicles that develop students’ ability to engage productively and ethically with AI technologies. In the lane approach, these assessments offer students the opportunity to explore AI’s capabilities, limitations, and ethical implications within specific disciplinary contexts in a society where AI is ubiquitous.

Unlike supervised assessments, unsupervised assessments typically occur without direct oversight, allowing students greater flexibility in terms of when, where, and how they complete their work. While AI use is often permitted and even encouraged in these assessments, AI may still be restricted in some unsupervised contexts. Such restrictions can be problematic, however, because it must be assumed that all students have access to AI tools in their word processors or elsewhere, which makes any restriction difficult to enforce.

Several effective forms of unsupervised assessment that develop AI literacy include:

  1. AI-assisted text analysis: Students use AI tools to summarise, annotate, or compare texts, then critically reflect on the process, including the strengths and limitations of AI outputs. Bearman’s work on evaluative judgement is particularly relevant here, as students learn to assess the quality of both their work and AI-generated content [9].
  2. Collaborative research with AI: Students utilise AI for literature reviews, brainstorming, and generating research plans, while documenting their interactions and critical decision-making processes.
  3. Digital storytelling or data journalism projects: Students create multimedia narratives using AI for data visualisation, content generation, or image creation, with all AI use documented and justified. These projects develop what Fawns describes as “postdigital capabilities”: the ability to work fluidly across digital and non-digital contexts.
  4. Critical reflection portfolios: Students document and evaluate AI-generated outputs in research or writing, including consideration of ethical implications and the decision-making processes involved. These portfolios can reveal students’ metacognitive development regarding AI, a key aspect of learning highlighted by Lodge in his work on educational psychology.

The integration of AI into unsupervised assessments represents a significant shift in teaching and learning practices. There is substantial work to be done in meaningfully integrating AI into educational contexts. While frameworks like the lane approach provide helpful structure, there is rarely a one-size-fits-all solution across diverse disciplines and institutional contexts.

It is crucial to integrate AI within disciplinary practices and values. What constitutes appropriate AI use varies significantly across different applications, such as creative writing, scientific research, and legal analysis. Unsupervised assessments must be designed with these disciplinary nuances in mind.

Furthermore, assessment reform requires a program-level perspective rather than piecemeal changes to individual subjects or units. Unsupervised assessments should build progressively across a program, developing increasingly sophisticated AI literacy alongside disciplinary knowledge.

The lane approach to assessment offers a promising framework for navigating the challenges and opportunities AI presents to higher education. By strategically integrating supervised and unsupervised assessments at the program level, institutions can ensure the integrity of their qualifications while preparing students for ethical and effective engagement with AI technologies.

As we continue to develop and refine this approach, we must maintain a flexible, principles-based orientation that provides direction rather than rigid prescriptions. The specific implementation will vary across disciplines, institutions, and educational contexts, but the core principles of balanced assessment design remain valuable guides.

Through thoughtful assessment reform, we can transform AI from a perceived threat to academic integrity into a powerful tool for enhancing teaching, learning, and assessment in higher education.

Further reading

Bearman, M., Dawson, P., Fawns, T., Nieminen, J. H., Ashford-Rowe, K., Willey, K., Jensen, L. X., Damşa, C., & Press, N. (2024). Authentic assessment: From panacea to criticality. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2024.2404634

Dawson, P. (2021). Defending assessment security in a digital world. Routledge.

Fawns, T., & Schuwirth, L. (2024). Rethinking the value proposition of assessment at a time of rapid development in generative artificial intelligence. Medical Education, 58(1), 14-16. https://doi.org/10.1111/medu.15259

Liu, D., Fawns, T., Cowling, M., & Bridgeman, A. (2023). Working paper: Responding to generative AI in Australian higher education. EdArXiv. https://osf.io/preprints/edarxiv/9wa8p_v1

Lodge, J. M., Howard, S., Bearman, M., Dawson, P., & Associates. (2023). Assessment reform for the age of artificial intelligence. Tertiary Education Quality and Standards Agency. https://www.teqsa.gov.au/sites/default/files/2023-09/assessment-reform-age-artificial-intelligence-discussion-paper.pdf
