Tom Findling, Co-Founder and CEO of Conifers – Interview Series

Tom Findling is a strategic leader with a proven track record in go-to-market (GTM), product, and data science. Having served as Chief Customer Officer at IntSights (acquired by Rapid7) and subsequently as Senior Director of Product at Rapid7, he brings a blend of strategic vision and hands-on execution to running large-scale operations. He has also held GTM and product roles at VMware and SUS.
Conifers offers an AI-powered CognitiveSOC platform that enhances the capabilities of security operations centers by integrating with existing tools, ingesting an organization’s unique data and risk profile, and continuously adapting investigation workflows. It tackles common challenges such as excessive alert volumes, limited visibility into SOC performance, and generic one-size-fits-all systems by enabling deeper investigations, modeling institutional knowledge, and using feedback loops to refine accuracy and reduce noise. The platform is designed to deliver measurable results, including a threefold return on investment and an 87% reduction in investigation time.
You’ve had a long career in cybersecurity, from IntSights to Rapid7—what experiences ultimately led you to co-found Conifers, and what problem did you set out to solve?
During my career, I’ve witnessed security operations teams struggle under the weight of too many alerts, too many tools, and too much pressure. At IntSights, I observed how difficult it was for humans to act on the intelligence being produced. At Rapid7, I took on the challenge of scaling our team with fewer people to support a larger customer base by redesigning how work was done and applying data science to handle the high-volume tasks. That was when I started to believe that running a security operations center (SOC) in the traditional way wouldn’t last. Conifers was born from our efforts to solve that scaling problem. We wanted to build a solution that could scale with the ever-increasing volumes of threats and data without burning people out. So we created CognitiveSOC, our AI SOC agents platform.
Conifers positions itself as an “AI SOC force multiplier.” How does your CognitiveSOC platform differ from traditional SOC automation tools?
Most automation tools in the SOC have been built on static playbooks. They execute a set series of steps but fail when attackers behave in unpredictable ways or when the environment changes. CognitiveSOC is an agentic AI platform that can learn and adapt to changing environments. It correlates data, draws on institutional knowledge, and reaches conclusions without every step of the process being scripted. The platform supports analysts rather than replacing them and continually gets stronger through feedback and learning rather than requiring manual upkeep. That steady growth in capability is what makes it a true force multiplier.
SOC teams often complain about alert fatigue and burnout. How does Conifers address this challenge in practical terms?
CognitiveSOC tackles alert fatigue by reducing the noise before it reaches an analyst. It takes the constant flood of alerts from across tools and consolidates them into investigations that already contain the relevant context. Instead of an analyst staring at a deluge of blinking alarms, they’re reviewing a much smaller set of investigations that include historical context, evidence, and likely causes. Analysts can then digest information and make decisions instead of chasing raw signals, which helps lessen fatigue and burnout.
Trust is critical in cybersecurity—how does your human-in-the-loop approach build confidence in AI-driven decision-making?
The key to trust is transparency and control. Analysts remain in charge of the system: they are presented with recommendations and explanations they can confirm, override, or rate. Over time, as they see the system making accurate calls, they can allow it to handle more actions automatically. This approach enables teams to test and correct the system while keeping authority in human hands. We build confidence and adoption by treating AI as a partner that learns from analysts instead of a black box that makes unexplained choices.
Your staged implementation framework allows gradual adoption. Why did you design it this way, and how does it help organizations overcome resistance to AI?
We knew from the start that the biggest barrier to adoption would be trust. If you walk into a SOC and tell the team to hand their operations over to an AI system, the answer will be no. By breaking adoption into stages, we allow organizations to start small with a limited number of use cases and scale them over time. Each stage demonstrates value and builds trust, which makes the next stage easier to accept. This gradual path replaces hesitation with evidence and ensures that teams feel in control.
Metrics are a big part of proving value in security. What KPIs should organizations track to measure progress toward an autonomous SOC?
The most important measures are the speed of detection, response, and remediation, along with the quality of investigations and the ratio of raw alerts to meaningful, contextual investigations. Another is how much workload the system can take on without human involvement. These indicators show whether the SOC is becoming more efficient, whether analysts are being freed to focus on higher-value work, and whether the organization is moving closer to a model where AI takes on the heavy lifting. Tracking those numbers provides clear proof of progress.
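To make those measures concrete, here is a minimal sketch in Python of how a team might compute the KPIs described above from its own case data. The Investigation record and its field names are hypothetical, and would need to be mapped to whatever the SOC's case-management system actually exports.

```python
from dataclasses import dataclass

# Hypothetical investigation record; field names are illustrative only and
# would be mapped to your own SOC / case-management export.
@dataclass
class Investigation:
    raw_alert_count: int          # raw alerts consolidated into this investigation
    minutes_to_detect: float      # time from first signal to detection
    minutes_to_respond: float     # time from detection to first response action
    minutes_to_remediate: float   # time from detection to full remediation
    handled_autonomously: bool    # closed without human involvement

def soc_kpis(investigations: list[Investigation]) -> dict[str, float]:
    """Compute the KPIs discussed above: detection/response/remediation speed,
    the raw-alert-to-investigation ratio, and the share of work handled without humans."""
    n = len(investigations)
    if n == 0:
        return {}
    return {
        "mean_minutes_to_detect": sum(i.minutes_to_detect for i in investigations) / n,
        "mean_minutes_to_respond": sum(i.minutes_to_respond for i in investigations) / n,
        "mean_minutes_to_remediate": sum(i.minutes_to_remediate for i in investigations) / n,
        "raw_alerts_per_investigation": sum(i.raw_alert_count for i in investigations) / n,
        "autonomous_handling_rate": sum(i.handled_autonomously for i in investigations) / n,
    }

# Example with fictional data from one week of SOC activity.
week = [
    Investigation(42, 12.0, 25.0, 90.0, True),
    Investigation(15, 8.0, 30.0, 120.0, False),
    Investigation(60, 5.0, 15.0, 45.0, True),
]
print(soc_kpis(week))
```

Tracked over successive reporting periods, falling time-to-respond and a rising autonomous handling rate would indicate the kind of progress toward an autonomous SOC that Findling describes.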
Conifers highlights integration with existing incident management systems. Why was non-disruption such a core design principle?
Security teams have invested heavily in their tools and processes. Most existing technology requires SOC teams to “context switch” and move to another tool to review and resolve alerts. We remove that friction by meeting the analysts where they are, embedded within the tools they already work with.
What do you see as the phased path from today’s semi-automated SOCs to a future where AI agents hold more authority over tools and data?
The path toward an autonomous SOC begins with augmentation, where AI analyzes and investigates alerts with human oversight. From there, organizations move into delegation, allowing the system to handle more and more use cases autonomously. The final stage is full autonomy, when AI agents are trusted to manage detection and response across environments while people guide strategy and handle unique situations. Today, most teams are still in augmentation with some early delegation, but comfort with handing off routine scenarios is growing quickly and will lay the groundwork for full autonomy.
Looking ahead five years, how do you expect SOC operations to evolve as AI matures—both in terms of technology and the analyst role?
In five years, SOCs will run on systems that look more like autonomous agents than dashboards. These agents will detect, respond, and adapt to new threats, and they’ll also tune policies and share knowledge across organizations in real time. As that capability matures, the analyst role will shift to oversight, strategy, and complex investigations. The work will be less about clearing endless alerts and more about applying expertise where it has the greatest impact. The result will be a SOC that feels less like a call center and more like a mission control room.
Thank you for the great interview. Readers who wish to learn more should visit Conifers.