Preparing for futures in an evolving world: future-proof use of genAI in an undergraduate internship subject
The significantly redesigned undergraduate internship elective, MULT20010 / MULT30019 Internship: Personal and Career Growth (formerly Arts internship), was delivered for the first time this summer. The new assessment design informed the subject’s approach to genAI. In this blog post, the subject’s teaching team, Dr Elena Balcaite and Dr Tahlia Birnbaum, reflect on these updates and innovations.
MULT20010 / MULT30019 Internship: Personal and Career Growth is an elective subject available to all Bachelor of Arts students as well as students from other bachelor programs as breadth. Since taking over the subject’s coordination in 2024, Dr Elena Balcaite, Teaching Specialist (Experiential Learning), was keen to engage students in ongoing reflection as a means of directing them towards action. The subject has always been grounded in the Experiential Learning philosophy, with reflection being a key component of assessment. The subject culminated in a reflective essay on students’ internship experiences. While a meaningful assessment, the essay did not always extend to students’ futures in tangible ways.
In the new subject design, a personal/professional portfolio replaced the reflective essay. This portfolio assignment gives students the opportunity to extend their reflections and turn them into a tangible resource for their futures. In doing so, students can explore different formats and approaches. For example, they can opt to write a full job application, create a personal website or develop a professionally compelling LinkedIn profile. Students also have a lot of freedom in how they choose to use genAI, but this freedom comes with an important responsibility – to apply human judgement to their AI use and act as a critical ‘human in the loop’ (Mollick, 2024). With 86% of university students reportedly using genAI (Digital Education Council, 2024), our stance is that AI use must be future-proof. This means always positioning the human as a critical actor who applies judgement in human-AI interactions.

Students have to plan their use of genAI ahead of time and explain their intentions in detail through a portfolio plan – an assessment that precedes the portfolio submission. In the plan, students set out what they would like to create as their portfolio, why they chose this type of portfolio, and how it aligns with their internship learnings and future plans. In addition, students explain the production steps involved in the creation of their portfolio, which include an explanation of – and guardrails for – any genAI use. This scaffolded assessment design ensures that we, as the teaching team, gain insight into the process students undertake to produce their portfolios. We are, thus, able to offer targeted feedback before the final portfolios are due and to pre-emptively address any potentially inappropriate engagement with genAI.
To foster closer engagement in the students’ learning process, the subject was intentionally redesigned to help us better understand each student by getting to know them on an individual level. This meant having fewer whole-of-cohort seminars and instead predominantly engaging with students in compulsory small group consultations. In groups of four, students can examine their internship experiences more closely. Ahead of each consultation, they complete an LMS learning module, which culminates in a reflection exercise and prepares them for the discussion with peers. The reflection exercises also contribute to one of the assessment tasks, and the peer discussions ensure their integrity.
This type of nested assessment design scaffolds student learning around the appropriate and critical use of genAI. The use of genAI is permitted in all four assessment tasks, starting with a personal learning plan. From the start of the subject, students are informed about the permissible uses of genAI, with each assessment brief on the LMS providing information on how genAI might be useful for the task. As educators, permitting the use of AI compelled us to critically consider its various applications and determine what we find acceptable. The permissible uses include provoking reflection, brainstorming and carrying out technical tasks associated with the creation of personal/professional portfolios. For example, students could use AI to learn new technical skills or outsource tasks like code writing to a chatbot. The subject’s genAI guidelines supplement these task-specific statements by echoing the importance of applying critical judgement in interactions with Large Language Models. Specifically, the guidelines instruct students to use AI critically and transparently, framing these instructions within ethical and legal considerations, as well as AI’s tendency to make mistakes. In addition, we model transparency in our own use of genAI. As students progress through the LMS learning modules, they periodically encounter our own declarations of AI use. For example, we developed scenarios for an email writing task in conversation with ChatGPT; we declare this in the module, noting the original prompt and how we transformed – and added to – the chatbot’s responses.
Over the summer, a sizeable proportion of students opted to use genAI for one or more of their assessment tasks; this allowed us to test our approach and adjust. Their declarations varied in level of detail, which we addressed through individual and whole-class feedback, as well as in peer group consultations. Generally, students responded well to this feedback, and their declarations became more specific as the term progressed. However, for future deliveries, we decided to provide more explicit guidance by developing a subject-specific declaration template for our students to use, which extends the University’s general guidance by prompting critical questioning of AI use. In light of the new declaration template, we have also updated all assessment rubrics to explicitly include an evaluation of AI use, where applicable to the student’s work. These additions and adjustments will contribute to more actively engaging students’ evaluative judgement when assessing their interactions with genAI and its outputs (Bearman et al., 2024).
While it is up to students whether to use this technology, having the option encouraged them to critically consider their own skills and assess if and how genAI would enhance their work. Many students who opted not to use genAI still included statements in their portfolio plans with the reasons for rejecting the technology. This demonstrated to us that most students were critically engaging with genAI at some level, and reflecting on its potential uses, ethical concerns and limitations.
Overall, we felt that getting to know our students well through small peer group consultations is what made our approach work. Our key takeaway is the importance of considering AI use within the context of whole-of-subject design. In our case, the peer group consultations were a critical part of the overall approach to learning and assessment in this subject, which gave us the confidence to grant students this level of freedom and choice. For future deliveries, we are considering an extension to the portfolio assignment – an interactive oral component where students present their portfolios to the whole class or their peer groups.
Would you like to learn more about this subject, its design and approach to genAI? Please reach out to Elena Balcaite and Tahlia Birnbaum.
References
Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Developing evaluative judgement for a time of generative artificial intelligence. Assessment & Evaluation in Higher Education, 49(6), 893–905.
Digital Education Council. (2024). Digital Education Council Global AI Student Survey 2024: AI or not AI: What students want.
Mollick, E. (2024). Co-intelligence: Living and working with AI. Penguin.