
Jon Friskics, Principal Technical Author, Pluralsight – Interview Series


Jon Friskics, Principal Technical Author, Pluralsight, is a seasoned educator and content leader specializing in software development and AI-focused learning experiences. In his current role, he creates expert-led video courses and hands-on labs covering technologies such as Claude, Node.js, TypeScript, Tailwind CSS, and Python, building on a long career within the company that spans senior authoring, learning architecture, and leadership roles in training and curriculum strategy. Prior to this, he played a key role in shaping scalable, multi-modal learning systems and guiding thousands of technical content creators with evidence-based instructional design practices. Earlier in his career, he led content strategy at Code School and taught a wide range of technical subjects at the University of Central Florida, establishing a strong foundation in both education and real-world development.

Pluralsight is a leading technology skills development platform that provides online courses, hands-on labs, and skill assessments to help individuals and organizations build expertise in areas such as software development, AI, cloud computing, and cybersecurity. Founded in 2004, the company has evolved into a comprehensive learning ecosystem used by enterprises and professionals worldwide, combining expert-authored content with insights to close skill gaps and accelerate workforce development in an increasingly technology-driven economy.

Your career spans interactive curriculum design, large scale technical learning systems, and advanced AI tooling education. How has that background shaped your perspective on why strong engineering judgment still matters in an era of AI assisted coding?

My experience has shown me that strong engineering judgment is about more than writing code. It’s about understanding systems and long-term consequences. AI can automate tasks and create a framework that leads to solutions, but it doesn’t always grasp the impact of decisions on users or systems in predictable ways. Human judgment ensures AI is used to augment productivity safely. Engineering judgment is more valuable than ever, guiding teams to leverage AI effectively while maintaining quality and reliability.

Pluralsight has long focused on closing technical skill gaps. How do you see that mission evolving now that AI collaboration skills must sit alongside traditional software development fundamentals?

Pluralsight’s mission is to equip learners with the foundational technical skills they need to succeed. As AI becomes a collaborator in development tasks, those fundamentals remain essential, but teams also need to understand how to work with AI responsibly and validate its outputs. AI can generate code, but it doesn’t replace the need for coding skills; instead, it can enhance them by layering workflow understanding and systems thinking on top of existing expertise. Pluralsight helps learners build on existing foundational skills and maintain strategic thinking through learning solutions that include on-demand courses, hands-on labs, and human expert-led workshops that evolve alongside tech innovation.

What specific architectural, deployment, and risk management skills do you believe are most at risk if developers become overly dependent on AI generated code?

Developers who rely too heavily on AI code generation and accept its output without taking the time to understand what was generated may weaken strategic skills like architectural thinking and risk assessment over time. Understanding how components interact and designing for reliability are capabilities learned through experience across many different situations. This means overreliance on AI could not only introduce hidden vulnerabilities and system instability, but also erode developers’ long-term problem-solving abilities, allowing those problems to go unnoticed or unsolved until it’s too late.

As autonomous coding tools gain traction, where do you see the biggest disconnect between what these tools promise and what engineers are actually prepared to validate or oversee?

Continuous learning is essential for engineers as they work alongside AI-assisted development tools and autonomous coding systems. Autonomous coding tools promise speed and accuracy in generating functional code, but they lack an understanding of system interactions, security, and business impact, and that means that you have to provide that missing context. The disconnect lies in assuming AI output is complete or correct in the absence of human oversight. When validation steps are skipped or rushed, teams risk introducing costly bugs, security vulnerabilities, or architectural inconsistencies. This reinforces the need for engineers to continuously update their skills so they can effectively manage and validate AI-generated work.

How should companies rethink their upskilling strategies to ensure developers know when to trust AI suggestions and when to slow down and apply deeper review?

Upskilling should emphasize knowing when AI output is reliable versus when deeper review is needed, including scenario testing and prompt validation. This approach reinforces judgment alongside coding skills, ensuring engineers can trust AI selectively rather than over-relying on generated code. L&D programs that provide structured, hands-on learning experiences allow developers to experiment with AI-assisted workflows, see how generated code behaves within full applications, and exercise that judgment in a sandbox environment. By leaning on both expert-led instruction and practical exercises, engineers can strengthen the critical thinking skills needed to evaluate AI-generated outputs responsibly.

In fast moving product environments, how can engineering leaders prevent AI generated shortcuts from introducing long term technical debt or security vulnerabilities?

Leaders have to enforce governance frameworks and risk assessment for AI-generated code. Establishing strong boundaries and auditing outputs helps prevent long-term technical debt and security vulnerabilities. I’d also suggest developer education focused on safe coding practices and architectural awareness so that engineers understand the trade-offs behind AI-generated suggestions. Regular hands-on review exercises and scenario-based training can help reduce the likelihood that shortcuts accumulate into hidden system risks.

What practical frameworks or guardrails do you recommend organizations adopt to keep AI coding a collaboration rather than a liability?

The tools that work best for this are new review protocols, version control tracking, and sandboxed AI experimentation. Leveraging metrics, observability frameworks, and evals helps teams track output quality and reinforce responsible collaboration, ensuring AI is a partner in productivity rather than a liability. It’s also valuable for organizations to explore AI-assisted workflows to understand the capabilities and limitations of these tools for their teams’ unique needs. These practices help teams develop the judgment needed to integrate AI suggestions effectively without compromising code quality or system stability.
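To make the idea of an "eval" guardrail concrete, here is a minimal sketch of what a gate for AI-generated code might look like: the generated snippet is executed in an isolated namespace and checked against known input/output pairs before anyone merges it. The `run_eval` helper, the `slugify` snippet, and the test cases are illustrative assumptions, not tooling the interview describes.

```python
def run_eval(generated_source: str, func_name: str, cases: list) -> bool:
    """Execute generated code in an isolated namespace and check it
    against known input/output pairs before it is accepted."""
    namespace = {}
    try:
        exec(generated_source, namespace)  # keep it out of the app's globals
    except Exception:
        return False  # code that doesn't even load is rejected outright
    func = namespace.get(func_name)
    if not callable(func):
        return False
    for args, expected in cases:
        try:
            if func(*args) != expected:
                return False
        except Exception:
            return False  # runtime errors count as failures, too
    return True

# Example: a snippet an assistant might produce, gated before merge.
snippet = "def slugify(s):\n    return s.strip().lower().replace(' ', '-')\n"
cases = [(("Hello World",), "hello-world"), (("  AI Tools ",), "ai-tools")]
print(run_eval(snippet, "slugify", cases))  # True only if every case passes
```

In practice a team would run something like this inside a sandboxed CI step, alongside the human review and version-control tracking mentioned above, so that a failing eval blocks the change rather than relying on a reviewer to catch it.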

Looking ahead, what distinguishes developers who will thrive in an AI augmented future from those who may struggle to adapt?

Developers who excel in an AI-augmented future will combine strong foundational skills with judgment, adaptability, and systems thinking. They understand when to trust AI, when to intervene to guide and redirect it, and how outputs fit into the broader system. Those who struggle may rely too heavily on automation, lack experience with edge cases, or fail to validate results, both risking errors for their organization and missing the valuable learning opportunities that strengthen a developer’s expertise over the course of a career. Continuous learning and hands-on experimentation with AI-assisted workflows will help developers sharpen these skills faster and remain effective as AI coding tools evolve.

Thank you for the great interview. Readers who wish to learn more should visit Pluralsight.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.