Interviews

Ashley Rose, Founder and CEO of Living Security – Interview Series

Ashley Rose, Founder and CEO of Living Security, is a serial entrepreneur and cybersecurity innovator focused on redefining how organizations address human risk in security. Since founding the company in 2017, she has led the development of a data-driven, behavior-focused approach to cybersecurity that moves beyond traditional awareness training toward measurable risk reduction and cultural change. Drawing on her background in product leadership and entrepreneurship, she has helped scale Living Security into a fast-growing SaaS platform used by enterprise organizations, while also contributing to the broader cybersecurity ecosystem as a mentor, advisor, and advocate for initiatives like Women in CyberSecurity.

Living Security is a cybersecurity SaaS company that focuses on Human Risk Management, helping organizations identify, measure, and reduce the risks associated with employee behavior. Its platform aggregates behavioral, identity, and threat data to pinpoint high-risk users and deliver targeted, real-time training and interventions designed to prevent breaches before they occur. By combining analytics, automation, and engaging training methods such as simulations and gamified experiences, the company enables enterprises to shift from compliance-driven security awareness to proactive, measurable risk reduction across their workforce.

You founded Living Security in 2017 after earlier experience building and scaling a consumer product business and working as a product owner. What specific moment or realization led you to shift into cybersecurity and focus on human risk, and how has that original thesis held up as AI becomes part of the workforce?

In 2017, most organizations were treating security awareness training as a checkbox exercise, and it wasn’t changing behavior. The turning point was realizing that if human behavior was driving breaches, the answer couldn’t be more forgettable training. Drew Rose, Co-Founder of Living Security, was running security programs himself and started gamifying them, building early prototypes that became cybersecurity escape rooms. We saw firsthand that when you made security experiential, people engaged, learned, and actually changed behavior. That became the foundation for Living Security.

As co-founders, Drew and I quickly realized that engagement was just the starting point. As we scaled those experiences into a platform, we began to see patterns in how people behaved, where they struggled, and where risk was actually concentrated. That exposed a much bigger gap: organizations had no real visibility into human risk or how to reduce it in a targeted way. That insight led us to pioneer Human Risk Management, which is about identifying, measuring, and reducing risk based on individual behavior, access, and threats, not just delivering training. As AI becomes embedded in the workforce, that original thesis has only expanded: the challenge is no longer just human behavior, but how humans and AI systems operate together. Humans are still at the center, now managing and deploying AI agents, which means you have to extend visibility to those agents and tie that risk back to the individual. That’s what’s driving our evolution into Workforce Security.

You have argued that human error is an incomplete explanation for breaches. How should organizations rethink workforce risk today when both human behavior and AI-driven actions are contributing to the attack surface?

Framing breaches as “human error” oversimplifies the problem and obscures where risk actually comes from. Human risk isn’t just about mistakes; it’s shaped by a combination of behavior, access, and exposure to threats. Some employees have privileged access to sensitive systems, some are targeted more frequently, and some exhibit riskier behaviors, so breach risk isn’t evenly distributed. To truly understand risk, organizations need visibility into where those factors intersect and where human risk exists.

As a result, organizations need to move beyond awareness-based models and rethink workforce risk as a shared, operational challenge, one that spans both human risk and AI-driven actions. This means focusing on continuous visibility into how work is performed, understanding where risk is concentrated, and applying targeted, real-time interventions across a hybrid workforce rather than treating risk as isolated user mistakes.

AI tools are now drafting code, handling workflows, and even making decisions. At what point do AI systems stop being tools and start being treated as part of the workforce from a security perspective?

AI systems stop being just tools and start being part of the workforce the moment they take action inside enterprise environments. At that point, they introduce risk in the same way employees do: through the actions they take, the permissions they operate with, and the data they touch. The shift for organizations is recognizing that AI agents aren’t just productivity layers—they’re operational participants, and they need to be governed, monitored, and secured alongside human users within a unified workforce risk model.

How should enterprises approach governance when risk is no longer limited to employees, but extends to AI agents operating with varying levels of autonomy and access?

Enterprises need to move beyond policy-based governance and treat it as a continuous, behavior-driven process that applies to both humans and AI agents. Most organizations already have AI policies in place, but the gap is in enforcement and visibility, especially as employees adopt tools beyond sanctioned environments and AI systems operate with varying levels of access.

Effective governance starts with clearly defining acceptable use based on role and data access, but it also requires real-time guidance embedded into workflows and continuous measurement so organizations can see where risk is emerging and adapt. Ultimately, governance has to reflect how work actually happens today: across a hybrid workforce where both humans and AI systems are making decisions, accessing data, and introducing risk.

Living Security has focused heavily on behavior-driven security models. How does that philosophy translate when some behaviors are now coming from AI systems rather than humans?

Living Security’s behavior-driven approach extends naturally to AI because the focus has never been just on who is creating risk, but how risk is introduced through actions. Whether it’s a person or an AI system, risk shows up in behaviors, how data is accessed, what actions are taken, and how decisions are executed inside workflows. As AI systems take on more operational responsibility, that same model applies: organizations need visibility into those behaviors, along with the ability to guide and intervene in real time.

That’s what led to the development of Livvy, the AI intelligence that powers the Living Security platform—applying predictive intelligence and continuous monitoring across both human and AI activity. Instead of treating AI as a separate challenge, it enables a more unified approach where behavior, human or machine, is continuously measured, guided, and managed within a single workforce risk model.

Many organizations still rely on periodic security awareness training. Why does this model break down in modern environments, and what does a truly adaptive, data-driven approach look like in practice?

Periodic security awareness training breaks down because it was built for a static threat landscape and assumes risk can be reduced through broad education. In reality, most incidents stem from everyday operational behaviors, not a lack of training, and risk is often concentrated among a small subset of users. A more adaptive, data-driven approach focuses on continuously identifying where risk actually exists and delivering targeted, real-time guidance in the flow of work—shifting from training completion to measurable risk reduction.

Your platform emphasizes quantifying human risk using real-world data. What are the most important signals organizations should be tracking today to understand risk dynamically rather than retrospectively?

Organizations should focus on behavior, identity and access, and threat exposure: the signals that reflect how risk is created and where it concentrates across the workforce. That now extends to AI as well, including what tools employees are using, what access those systems have, and how they’re being configured or prompted. On their own, these signals are useful, but the real value comes from how they come together to tell a story about risk.

For example, a CFO who has access to financial systems, isn’t using MFA, is using AI tools connected to sensitive data, and is being actively targeted by phishing campaigns represents a very different level of risk than a BDR with limited access and lower exposure. The risk isn’t just in what someone does, but in what they have access to, the systems acting on their behalf, and how often they’re being targeted. When you can see those factors together, you can understand where a breach is most likely to occur and take action in real time, whether that’s alerting the individual, tightening controls, or prioritizing intervention for that group.
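To make the idea concrete, here is a minimal sketch of how signals like access, MFA status, AI tool usage, and phishing exposure might be combined into a single risk score. This is purely illustrative: the signal names, weights, and caps are hypothetical and are not Living Security's actual scoring model, which would be calibrated against real incident data.

```python
from dataclasses import dataclass

@dataclass
class UserSignals:
    """Illustrative risk signals for one user (all fields hypothetical)."""
    privileged_access: bool    # access to sensitive systems (e.g., financial data)
    mfa_enabled: bool          # multi-factor authentication in use
    ai_tools_with_data: int    # AI tools connected to sensitive data
    phishing_targets_30d: int  # times targeted by phishing in the last 30 days
    risky_clicks_90d: int      # risky behaviors observed in the last 90 days

def risk_score(u: UserSignals) -> float:
    """Combine access, behavior, and exposure into a single 0-100 score.

    Weights and caps are made up for illustration only.
    """
    score = 0.0
    if u.privileged_access:
        score += 30                                   # access amplifies impact
    if not u.mfa_enabled:
        score += 20                                   # weak identity controls
    score += min(u.ai_tools_with_data * 5, 15)        # systems acting on their behalf
    score += min(u.phishing_targets_30d * 4, 20)      # how often they're targeted
    score += min(u.risky_clicks_90d * 3, 15)          # observed risky behavior
    return min(score, 100.0)

# The CFO profile from the example above vs. a low-exposure BDR
cfo = UserSignals(True, False, 3, 6, 2)
bdr = UserSignals(False, True, 0, 1, 0)
print(risk_score(cfo))  # 91.0 -- risk concentrated in one individual
print(risk_score(bdr))  # 4.0
```

The point of a model like this is that no single signal tells the story: the CFO scores high because privileged access, missing MFA, AI tool exposure, and active targeting compound, which is exactly the intersection the interview describes.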

AI is creating new vulnerabilities, but it is also being used defensively. Where do you see the balance shifting, and are we heading toward a net-positive or net-negative security impact from AI?

AI is doing both, expanding the attack surface while also improving how organizations detect and respond to risk. On one hand, it’s enabling more complex workflows and autonomous actions that can introduce new vulnerabilities; on the other, it allows security teams to analyze behavior at scale and act more quickly. Where the balance lands depends on how well organizations adapt. Right now, many are still catching up on visibility and governance, especially as AI is used in ways they haven’t fully mapped. Long term, it can be net-positive, but only if organizations treat AI as part of the workforce and apply the same level of monitoring, guidance, and control as they do for human-driven risk.

Not all employees or AI systems pose equal risk. How should organizations prioritize intervention without creating friction or over-surveillance?

Not all risk is equal, and treating it that way is what creates friction. The key is to focus on where risk is actually concentrated—since roughly 10% of users drive 73% of risk—and apply targeted interventions there rather than broadly across the entire workforce. That means using behavioral, access, and exposure data to prioritize who and what needs attention, and delivering guidance in the flow of work instead of layering on more controls. Done right, it reduces friction by making the secure path the easiest one to follow, rather than increasing surveillance across everyone.

If we fast forward five years, what will workforce security look like, and what are most organizations still underestimating today?

If we fast forward five years, workforce security will be defined by how well organizations can understand and manage risk across both humans and AI agents operating together. It won’t be about periodic training or static controls; it will be about continuous visibility, real-time risk assessment, and the ability to take action dynamically as behavior, access, and threats change. Humans will still be at the center, but they’ll be managing and extending themselves through AI, which means security has to account for both.

What most organizations are underestimating is that there’s already a visibility gap in human risk today, and AI is compounding it. Many think they have an AI strategy, but in reality, they lack visibility into both their people and the tools those people are using. Step one is understanding human risk, behavior, access, and exposure to threats. Step two is extending that visibility to the AI agents employees are using, which are only as powerful and risky as the access and decisions humans give them. Without that foundation, organizations aren’t just behind on AI, they’re operating with growing blind spots across their entire workforce.

Thank you for the great interview. Readers who wish to learn more should visit Living Security.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.