Where AI Is Actually Improving Learning Outcomes, Where It Creates Friction, and What Higher Education Should Do Next

Artificial intelligence is here in higher education. It’s already shaping how students learn, how faculty teach, and how institutions evaluate performance. The question is no longer whether AI belongs in the classroom: students are using it, employers expect familiarity with it, and institutions must decide how to respond responsibly. The key question is how higher education can use AI to prepare our students for the future of work.

What I see across higher education is less ideological than public debates suggest. Students use AI because it helps them get unstuck and move forward. Faculty are experimenting because they want to support learning without undermining standards. Administrators are trying to establish guidance that reflects reality rather than fear. As such, AI is forcing higher education to reconsider what it means to demonstrate understanding, originality, and mastery in the first place.

At Westcliff University, our approach has been practical. We look at outcomes, we observe what happens in real courses, we listen to faculty and students, and then we adjust. That process has revealed a clear pattern: AI improves learning when it is embedded in intentional design, and it causes problems when it is treated as either a shortcut or a threat.

Where AI Is Genuinely Improving Learning

The common thread in the areas identified below is not automation but cognition. AI accelerates feedback, clarifies thinking, and supports iteration without removing intellectual responsibility from the student.

Guided practice and timely feedback

The strongest learning gains appear when AI is used for guided practice. Students benefit when they can ask a question, receive an explanation, try again, and get immediate feedback. That feedback loop is central to learning, especially in large or asynchronous courses where individual instructor attention is limited.

Well-designed AI support tools do not deliver answers, but provide targeted, directional feedback to keep students engaged in the process of discovery. When AI is designed to prompt, question, and scaffold thinking rather than resolve uncertainty, it mirrors the way strong peer learning supports deeper understanding.

A 2025 study in Scientific Reports found that students using an AI tutor learned more efficiently than those in a comparison condition, and they did it with higher engagement and motivation. The takeaway is not about AI replacing teaching. It is that frequent, timely feedback accelerates understanding, and AI can help deliver this type of feedback at scale.

AI can also strengthen writing when it’s used to support revision rather than replace authorship.

Many students struggle with organizing ideas, clarifying arguments, or revising effectively. Used appropriately, AI can help surface structural weaknesses, identify unclear reasoning, and prompt clearer thinking.

At the same time, students must learn how to engage AI responsibly. This includes understanding how to craft effective prompts, recognizing when an AI response may contain hallucinations or inaccuracies, and verifying claims against reliable sources. Teaching students to question AI outputs rather than accept them passively protects the integrity of their work and strengthens their critical thinking.

The difference between learning and shortcutting ultimately comes down to expectations. When instructors require outlines, drafts, and brief reflections explaining what changed and why, students remain accountable for their thinking. They stay actively involved in shaping the work rather than outsourcing it, and they remain the ones ultimately calling the shots. A 2025 systematic review of large language models in education identifies writing and feedback as major use cases while also cautioning against overreliance.

Beyond drafts and revisions, AI can also function as a dialogue partner that challenges a student’s argument—asking why a claim matters, what evidence may be missing, or how a particular audience might respond. In this way, writing becomes less of a submission exercise and more of a process of intellectual defense and refinement. Assessing that process gives instructors valuable insight into the development of a student’s critical writing mind.

Reducing barriers for students who need scaffolding

AI can reduce friction for multilingual learners, first-generation students, and returning adults by offering personalized explanations, examples, and clarification on demand. This does not replace instruction. It lowers unnecessary barriers so students can participate more fully.

The real opportunity lies in adaptive scaffolding that adjusts in real time and intentionally tapers support as competence grows. When AI is used to calibrate challenges instead of eliminating them, students build confidence through demonstrated progress, not dependency.

Giving faculty time back for teaching

AI can assist faculty with time-consuming tasks such as drafting rubrics, generating example questions, summarizing discussion threads, or producing first-pass feedback suggestions. The benefit comes when faculty reinvest the saved time into higher-value work: better assignment design, richer discussion, and more direct student support.

Where Institutions Are Running Into Friction

Assessment validity is the central challenge

The most serious issue in learning assessment is not plagiarism in the traditional sense. It is that many common assessments no longer measure learning effectively when AI is readily available.

Student AI adoption is already widespread. The HEPI and Kortext Student Generative AI Survey 2025 reported that 92% of students used AI in some form, and 88% used it for assessments. If an assignment can be completed with minimal understanding, it no longer functions as a valid measure of learning outcomes.

This is why debates about integrity persist. AI is exposing the shortcomings of traditional assessments. When assessment is weak, suspicion grows. Stronger or better designed measurement reduces that tension.

Policy lag and inconsistency

Many institutions are still catching up. The 2025 EDUCAUSE AI Landscape Study reports that fewer than 40% of surveyed institutions had formal acceptable-use policies in place at the time of reporting.

In the absence of clarity, faculty set their own rules and students receive mixed messages. One course encourages experimentation, another forbids AI entirely. This inconsistency undermines trust and makes it harder to teach ethical AI use and realize its benefits.

Performance gains without durable skill

AI can improve short-term performance without building long-term capability. A 2025 field experiment examining GPT-4–based tutoring in math showed that while AI tutoring improved performance during practice, students sometimes underperformed when the tool was removed. The institutional risk lies in confusing short-term performance gains with durable capability, especially when AI masks gaps that only surface once the tool is removed. The implication is straightforward. AI can reduce productive struggle, and struggle is often where learning takes place. If the AI design removes too much cognitive effort, students may appear proficient without developing independent competence.

Equity concerns are shifting

AI has the potential to democratize support, but it can also widen gaps if access and AI literacy vary. Students with better devices, paid tools, and more experience using AI have advantages that are not always visible.

Equity impacts extend beyond access to tools. AI increasingly shapes how students manage time, cognitive load, and emotional strain, particularly for those balancing work, caregiving, language barriers, or re-entry into education. When used well, AI can level the playing field, stabilize learning, and build confidence. When used unevenly, it can deepen invisible disparities.

Governance and data stewardship

As AI becomes embedded in advising, tutoring, and assessment, governance becomes an academic quality issue. Institutions must understand how student data is used, how vendors handle it, and how equity is monitored.

Frameworks like the NIST AI Risk Management Framework provide structure, but governance only works when it is applied collaboratively and transparently. In an AI-enabled institution like Westcliff, governance decisions increasingly function as academic quality assurance, directly shaping trust in credentials, assessment integrity, and institutional reputation.

What Higher-Education Leaders Should Prioritize

1. Redesign assessment to make learning visible

AI detection is not a long-term solution. It is reactive and adversarial, and it does not address the underlying measurement problem.

A more durable approach is assessment redesign that emphasizes reasoning, knowledge processing, and performance. This can include oral defenses, structured follow-up questions, process-based grading with drafts and reflections, applied projects grounded in real constraints, and in-class synthesis tasks.

At Westcliff, we have used an oral-response approach as part of this shift. One example is Socratic Metric, an AI-enabled assessment framework that replaces written discussion questions with recorded student responses to open-ended prompts grounded in course material and, in some cases, a student’s own prior writing. Students receive immediate feedback that encourages elaboration and clarification. Faculty can review student responses to evaluate depth of understanding and authenticity.

The goal is not enforcement. It’s visibility. Oral-response formats reveal how students think under iterative follow-up, which is difficult to outsource and easier to evaluate meaningfully. Socratic Metric is one example among many possible approaches. The broader point is that assessment must evolve to focus on thinking, not just output.

A useful leadership question is simple: if a student uses AI on this assignment, does it still measure the intended learning outcome? If the answer is unclear, that’s where redesign should begin.

2. Treat AI literacy as a core learning outcome

Students are entering a workforce where AI will be embedded in daily work. They need skill in judgment, not just familiarity.

The World Economic Forum’s Future of Jobs Report 2025 highlights the growing importance of AI and data-related skills alongside creative thinking and resilience. AI literacy should include understanding strengths and limitations, recognizing bias and uncertainty, verifying outputs, handling data responsibly, and knowing how to use AI effectively.

This is not about turning every student into a technical expert. It is about graduating people who can collaborate with AI thoughtfully and ethically. AI literacy also extends beyond student outcomes; it is an institutional capability. Faculty, administrators, and academic leaders all require shared fluency to ensure consistency, fairness, and credibility across the learning experience.

3. Put governance in place that builds trust

Good governance shouldn’t slow innovation down; it should be a growth strategy that helps AI scale faster and more reliably. That usually means a small, cross-functional group that includes academic leadership, IT, legal/privacy, and student support, with clear roles and decision rights.

It also needs to be straightforward and visible. Faculty and students should know where AI is being used, what data is collected (and what isn’t), who can access it, and how decisions get made. When those basics are clear, people are far more willing to adopt new tools because they feel informed and protected.

4. Invest in faculty enablement

Faculty are the key to meaningful AI integration. They need practical support, not just policy statements.

The most effective efforts are hands-on: assignment redesign workshops, examples of effective practice, clear rubrics, and communities where instructors can share what works. When faculty understand both the strengths and limits of AI, they will be able to design better learning experiences.

Supporting faculty in this transition also means recognizing a deeper shift in their role: from primary sources of content to designers of learning, evaluators of thinking, and stewards of academic judgment.

5. Measure impact, not adoption

AI should be evaluated like any instructional intervention. Adoption alone does not indicate success.

The right questions are outcome-focused: Are students retaining knowledge? Are they transferring or generalizing their learning within new contexts? Are equity gaps narrowing or widening? Are graduates demonstrating independent judgment?

If institutions do not measure these second-order effects, they risk optimizing for efficiency while quietly undermining confidence, equity, and long-term capability. Measuring impact in an AI-enabled institution requires looking beyond performance metrics to understand who benefits, who struggles, and what forms of effort are being amplified or reduced.

AI Is an Amplifier. What It Amplifies Is Up to Us.

Knowing that AI integration is a certainty, the defining question for higher-education leaders is whether institutions will redesign learning intentionally or allow legacy models to erode under its weight.

AI is neither inherently beneficial nor inherently harmful. It simply amplifies whatever a learning system already rewards, whether that system is effective or ineffective.

If higher education rewards superficial completion, AI will accelerate it. If institutions design for reasoning, reflection, and authentic performance, AI can support deeper learning and better workforce preparation.

The institutions that succeed will redesign assessment, teach AI literacy as a core competency, and govern AI in ways that protect trust while allowing responsible innovation. That is the next phase of academic leadership.

Anthony Lee, Ed.D. is President of Westcliff University and a higher-education leader focused on workforce readiness and the responsible integration of emerging technologies into teaching, learning, and assessment.