

AI in Higher Education – Balancing the Risks and Rewards


A significant portion of the discussion around generative AI tools has focused on the challenges related to academic integrity and AI plagiarism. Cheating has dominated the discourse.

As a result, the primary focus of many administrators and instructors has been a search for tools that uncover AI-generated writing. For higher education leaders today, that search for reliable AI detection may be a futile one. Instead, the focus should be on how AI can enhance the academic experience and on evolving assessment practices to better evaluate learners’ understanding.

AI detection: a flawed proposition?

To date, AI detection tools have fallen short of uncovering AI-generated responses accurately and without bias. Researchers at Maryland found that even the “best-possible detector” performs only marginally better than a random classifier. Another study of 14 detection tools by researchers in six countries found that the accuracy rate of detection tools varied widely — between 33% and 79%.

AI detection tools also introduce bias. According to a Stanford study, the solutions mislabeled English as a Second Language (ESL) students’ writing as AI-generated more than half the time. Similar concerns have also been raised about how these tools wrongly classify writing by those with autism spectrum disorder as AI-generated.

Our own recent research on AI detection tools, conducted with a group of clients, found that users had very little confidence in the results. Making matters worse, our findings confirmed what researchers have found elsewhere: writing was often mislabeled as AI-generated, and accuracy was too low to be usable with students or for day-to-day academic integrity purposes.

The reality is that today’s tools aren’t up to the task without raising serious accuracy and ethical concerns, and they may never be. There’s a better way forward: focus on evolving our assessment practices by building more authentic assessments and collaborative learning experiences that encourage deeper learning.

Building better engagement

Long before the advent of generative AI tools, educators valued authentic assessments, such as critical thinking exercises, interviews, case studies, group projects and presentations. Studies have shown benefits from assigning learners tasks like these that require them to problem solve, think critically and self-reflect instead of simply recalling knowledge. For a business course, an authentic assessment could look like conducting a negotiation with a group of peers.

Giving students the opportunity to demonstrate critical thinking and problem solving provides them with the skills required to eventually become successful professionals, according to researchers who conducted a literature review of the topic.

The debate around AI plagiarism has rekindled the push for instructors to develop assessments that evaluate more deeply while also lowering the efficacy of AI-generated responses. As Cecilia Ka Yuk Chan, head of professional development at the University of Hong Kong, wrote, teachers must “develop assessment tasks that require critical and analytical thinking to avoid AI-assisted plagiarism.”

Authentic assessment takes on even more importance in the era of generative AI. Tasks that focus on critical thinking, personal perspectives, and self-reflection are much harder for generative AI technologies to produce in a way that appears genuine. Activities might also look to explore subject areas where these tools do not have as much historical data with which to work, such as current and local events, personal experiences, and future predictions.

Developing these kinds of authentic assessments is time-intensive, though. It requires already time-strapped instructors to potentially revamp curricula and create entirely new assignments for students.

Ironically, AI tools can help with this challenge. Leveraging AI for ideation and brainstorming as part of the course design process can make developing engaging authentic assessments and other activities more efficient. However, it’s critical that the instructor remains in control, reviewing and approving any AI-generated course design suggestions. That is a low-risk, high-reward sweet spot for the application of AI.

Digital learning environments can also facilitate authentic assessments, project work and group work. These activities can take place in a single environment and build on one another. By combining digital learning environments with the possibilities unlocked by generative AI, we may soon see entirely new, innovative and pedagogically sound learning experiences become a reality.

The way forward

Regardless of the pros and cons of AI, its use will continue to expand, and it will deliver greater opportunities for students and institutions as the future unfolds. Rather than attempting to limit possible threats, institutions need to focus on maximizing AI’s benefits and unlocking its potential in the learning experience, and they should look to authentic assessments as a way forward.

AI will bring about change. Discourse and debate around AI have often elicited comparisons with previous technologies. The advent of spell checkers and calculators in the classroom sparked conversations about whether these tools were a help or a hindrance to students’ ability to learn. Much as those tools have become common in everyday academic use, AI can be a tool to help students. As such, a fundamental rethink of academic integrity and many other parts of a learner’s journey will be essential for success.

Flexible policy and practices are needed. With generative AI tools here to stay, it’s no longer feasible to maintain restrictive policies, especially knowing that generative AI is on its way to becoming part of everything we interact with (think Copilot in Microsoft Office). The line between AI-assisted work and AI plagiarism is becoming more blurred every day.

Establish a policy. Establishing a policy framework that reflects the institution’s unique culture, with clear guidelines for taking advantage of AI alongside appropriate safeguards, is essential. Departments and instructors should have the autonomy to apply these policies relative to their subject matter. Co-creating and discussing practices with students is also crucial to building a culture of trust across an institution.

Empower instructors to develop ethical approaches. Teachers are the engines powering learning, and supporting them is fundamental to providing great experiences for learners in the AI era. Institutions need to empower instructors to embrace authentic assessment practices, including leveraging the power of AI to make administrative and course design tasks more efficient.

Time to learn

AI tools will only become more ingrained in the processes of our daily lives, including those in the classroom. To realize the benefits these tools afford, instructors and administrators need training and institutional support. Institutions must provide them with the knowledge and skills required to harness the opportunities while reducing the risks. Those opportunities include achieving a long-sought goal: evaluating learners on their ability to apply knowledge in real-world situations. And those who harness AI’s power to build better learning experiences will ensure students keep learning in the AI era.

As Chief Product Officer, Nicolaas leads the product strategy for Anthology’s holistic EdTech ecosystem. Nicolaas has nearly 20 years of experience in EdTech, having worked for several institutions around the world. He holds a degree in Artificial Intelligence and Natural Language Processing from the University of Cambridge.