Interviews
Elizabeth Nammour, CEO & Founder of Teleskope – Interview Series

Elizabeth Nammour, CEO and founder of Teleskope, is a security engineer turned founder whose career spans data security, software engineering, and innovation roles at some of the world’s largest technology organizations. While working as a senior software engineer focused on data security at Airbnb, she confronted the operational challenge of understanding and controlling enormous, rapidly growing data estates spread across dozens of systems. That experience, combined with earlier technical and strategic roles at Amazon and Booz Allen Hamilton, shaped her perspective on how modern organizations struggle to govern sensitive data at scale and ultimately led her to build a company addressing that gap.
Teleskope is a modern data security platform designed to help organizations continuously understand where their data lives, how it is used, and what risks it creates as environments grow more complex. Built with developers and security teams in mind, the platform emphasizes precise data visibility, automated remediation, and policy-driven controls across cloud, SaaS, and hybrid environments. By moving beyond static audits and manual processes, Teleskope aims to give organizations a practical foundation for managing data sprawl while enabling responsible AI adoption.
You founded Teleskope after building in-house data security tooling at Airbnb to catalog and classify data at massive scale. What moment convinced you this needed to be a company rather than an internal project, and how did those early lessons shape your product thesis?
When I finished building this product at Airbnb, I had the opportunity to write a post on Airbnb’s blog called “Automating Data Protection at Scale”. I never really anticipated anything coming from that, but the security community responded really favorably, and practitioners from all over the world started reaching out to me. I definitely had this moment of realizing that so many people shared the same challenges I had faced, and that this product was something the market was genuinely asking for. I leaned a lot on feedback from peers in the early days, and even Teleskope v1.0 was far better than what I had first built at Airbnb. Today our product is bigger and more impactful than I could have ever imagined back then.
Your multi-model classification pipeline blends traditional ML, format-specific models, and GenAI validation. Can you walk us through the decision logic and how you reduce false positives/negatives at scale?
I would definitely recommend reading our blog, which I wrote alongside our head of Data Science, Ivan, about data classification. I will first say that this is an art, as much as it is a science. There’s a tremendous amount of nuance – every time you find a sensitive data entity, the context will be unique. Meanwhile, the scale of data has made this problem infinitely more challenging, because scanning petabytes of production data takes a lot of compute and a lot of time. Basically, there’s a reason this has continued to be viewed as a largely unsolved problem.
Where the art comes in is figuring out how to balance all the tradeoffs: speed, latency, accuracy, cost, and breadth (in data stores, file formats, languages, etc.). We have always believed that the answer has to be creative, and has to be multi-model. This is why we combine many of the available classification methods into a dynamic, nuanced approach that, to sum it up, is built to use the lightest-weight method possible without meaningfully sacrificing accuracy. This dynamic approach lets us scan data 10-20x faster than tools that depend on one-size-fits-all LLMs, while also delivering far more accurate results than regex or conventional context-based approaches.
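To make the “lightest-weight method first” idea concrete, here is a minimal sketch of how a tiered classifier could be structured, escalating from a cheap pattern match to an ML scorer to GenAI validation only when the cheaper tier is inconclusive. The helper functions, thresholds, and single SSN example are illustrative assumptions, not Teleskope’s actual pipeline.

```python
# Illustrative tiered classifier: use the cheapest method that can decide,
# and escalate only when it cannot. All helpers and thresholds are
# hypothetical stand-ins, not Teleskope internals.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def ml_entity_score(value: str, context: str) -> float:
    """Placeholder for a lightweight ML model scoring the value plus context."""
    return 0.9 if "tax" in context.lower() or "ssn" in context.lower() else 0.5

def llm_validate(value: str, context: str) -> str:
    """Placeholder for a GenAI validation call, reserved for ambiguous cases."""
    return "ssn"  # a real system would prompt a model with the surrounding context

def classify_value(value: str, context: str) -> dict:
    # Tier 1: cheap regex filters out the vast majority of values.
    if not SSN_PATTERN.search(value):
        return {"label": "not_ssn", "method": "regex"}
    # Tier 2: lightweight model uses context (column name, neighbors) for precision.
    score = ml_entity_score(value, context)
    if score >= 0.95:
        return {"label": "ssn", "method": "ml", "confidence": score}
    # Tier 3: GenAI validation only for the small ambiguous residue, keeping
    # LLM cost and latency off the hot path.
    return {"label": llm_validate(value, context), "method": "llm"}

print(classify_value("123-45-6789", "column: tax_id"))
```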
You recently introduced Prism, focusing on business-level data understanding and GenAI-powered remediation. What new use cases does this unlock compared to element-level PII detection, and how do you guard against hallucination in remediation actions?
When I first set out to tackle the challenge of data classification and protection, my focus was on reducing actual false positives. For example, how do we make sure that, at least 95% of the time, when we flag something as a Social Security Number, it really turns out to be an SSN? A few years ago, even 80% accuracy across different data element types would have been an improvement.
But by working closely with our customers this past year, it became clear that the “noise” teams are overwhelmed by isn’t just caused by inaccurate data entity classifications (the traditional “false positives”). The noise is often just as much about being inundated with irrelevant alerts as it is about getting false alerts. What Prism does is unlock our ability to consider much more context: not just “what is this piece of data” or “who’s accessing this file”, but also “what, practically speaking, is this file”. Combining this with information we can intake about what a given business actually does and cares about, we can deliver a product that caters to each company’s different definitions of “sensitive” data.
Capturing this level of nuanced context is a genuine game changer. Storing hundreds of SSNs in a Google Doc in your personal drive, for example, might be a major risk and violation of your Data Management policy. But having a folder in a secure HR drive full of your employee W2s? That’s expected behavior. Security teams want to be alerted to the former, but getting an alert for every employee’s W2, stored correctly, is just noise. Understanding where and within what context sensitive data resides requires more than just an entity classification model.
We work with a multinational chemical company, Chevron Phillips Chemical. This business would never buy a privacy tool or a standard DSPM, because they don’t really see consumer data risk as a priority. What they do care about, though, is intellectual property in the form of proprietary chemical equations. By boiling down the essence of a document into a list of clustered labels, we can not only detect unique sensitive elements, but also find instances of these data assets being in the “wrong” places. Combining this context with our automated remediation, we can then take action to archive, delete, redact, or move those files to their proper location. Nobody in the data security market is doing this kind of work.
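As a rough illustration of how this business-level context could turn entity findings into actions rather than noise, here is a hypothetical policy sketch. The document labels, locations, and rule structure are invented for the example and are not Teleskope’s configuration format.

```python
# Hypothetical sketch: combine entity findings with document-level context
# (a clustered label and a location) to decide on an action instead of
# raising an alert for every match.
from dataclasses import dataclass, field

@dataclass
class Finding:
    entities: set = field(default_factory=set)  # e.g. {"ssn"}, {"proprietary_formula"}
    document_label: str = ""                    # clustered label, e.g. "employee_w2"
    location: str = ""                          # e.g. "personal_drive", "hr_drive"

def decide_action(f: Finding) -> str:
    # Expected behavior: W-2s living in the secure HR drive are not noise.
    if f.document_label == "employee_w2" and f.location == "hr_drive":
        return "allow"
    # Policy violation: bulk SSNs in a personal drive should raise an alert.
    if "ssn" in f.entities and f.location == "personal_drive":
        return "alert_and_quarantine"
    # IP risk: proprietary formulas stored outside the approved R&D share.
    if "proprietary_formula" in f.entities and f.location != "rnd_share":
        return "move_to_rnd_share"
    return "log_only"

print(decide_action(Finding({"ssn"}, "hr_spreadsheet", "personal_drive")))
```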
Teleskope highlights continuous discovery across multi-cloud, on-prem, and third-party systems—including shadow data. What does “complete map” coverage look like, and how quickly can you surface unknown stores in a greenfield deployment?
“Complete” is a tricky word here; in reality it’s a bar that is constantly moving, even on a daily basis. That’s how tough it is to manage data sprawl. Our goal has always been for Teleskope to exist wherever our customers’ data exists. We’re ultimately an integration-based product in many ways: we’ve built dozens of proprietary data connectors to crawl, scan, and classify data across a wide range of SaaS tools, cloud data stores, and on-premise systems. Most customers start with a few connectors that they view as highest-risk, or where they have the least visibility, so in reality we are rarely everywhere a company’s data resides. However, within each data source, we are constantly crawling the environment to surface new accounts, tables, blobs, files, messages, and so on. So wherever we are, we’re finding data, new and old, in close to real time.
For AI security & governance, how do you track lineage between training datasets, models, prompts, and outputs for auditability?
We really have three core ways we support AI security and governance. First is our ability to apply our classification and remediation technology to data in motion via our APIs. When companies are generating or preparing datasets to train their own models, they need a way to ensure that data is free of PII or other sensitive data. So we plug right into a data pipeline and can scrub datasets as they are moved or copied into a training set, ensuring those models are never at risk of outputting sensitive data.
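A minimal sketch of what scrubbing records on their way into a training set might look like, assuming a hypothetical `redact_record` helper standing in for a call to a redaction API such as Teleskope’s; the field names and the simple email pattern are illustrative only.

```python
# Scrub each row as it is copied into the training set, so raw PII never
# lands in the training data. `redact_record` is a stand-in for a real
# redaction API call.
import json
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_record(record: dict) -> dict:
    """Replace sensitive values with placeholder tokens before training."""
    return {k: EMAIL.sub("<EMAIL>", v) if isinstance(v, str) else v
            for k, v in record.items()}

def build_training_set(source_rows, out_path: str) -> None:
    with open(out_path, "w") as out:
        for row in source_rows:
            out.write(json.dumps(redact_record(row)) + "\n")

build_training_set(
    [{"ticket": "Contact me at jane@example.com", "priority": "high"}],
    "training_set.jsonl",
)
```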
Second, we see our core product as an enabler of AI adoption. Every company is under pressure to utilize AI tools to operate more efficiently and to keep up with the market. A great example of this is M365’s Copilot, which provides a smart search capability and makes it easier to find files or data. But these tools by definition make it easier to find sensitive data too, and so we have a lot of companies coming to us saying “we need to implement this AI tool, but we’re scared of what it will surface.” They need Teleskope to come in, scan their environment, and automatically enforce their data management and security policies, so they can adopt AI with confidence.
Finally, we are actually building integrations for AI tools that will redact or quarantine prompts containing sensitive data before it can be leaked into public AI tools like ChatGPT. Tons of companies just ban the use of these tools, but there is a way to adopt them safely, ensuring no sensitive data (as defined by each company) is exfiltrated.
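The gating decision described above could look roughly like the sketch below: high-risk entities block the prompt, lower-risk ones are redacted in place. The entity patterns and the two-tier policy are simplified assumptions, not the actual integration.

```python
# Illustrative prompt gate: decide whether an outbound prompt to a public AI
# tool should pass through, be redacted, or be quarantined outright.
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def gate_prompt(prompt: str) -> tuple[str, str]:
    # High-risk entities: quarantine the prompt rather than forwarding it.
    if SSN.search(prompt):
        return "quarantine", ""
    # Lower-risk entities: redact in place and let the prompt through.
    if EMAIL.search(prompt):
        return "redact", EMAIL.sub("<EMAIL>", prompt)
    return "allow", prompt

print(gate_prompt("Summarize this email from jane@example.com"))
```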
Redaction and “remediation at the source” are core to your approach. What’s your philosophy on auto-remediation vs. human-in-the-loop, and where do you draw safety boundaries?
We realized a couple of years ago that, as needed as data discovery and classification have been, they only provide half the story. Finding data risk is the first step, but resolving and remediating that risk is the actual end goal. Our customers usually start by evaluating Teleskope’s findings in our data catalog, then move to remediating with a human in the loop before moving to fully automated remediation. We’re very aware that, in reality, there will always be some actions that teams will never be 100% comfortable automating completely. Deleting data from a production database, for example, could be very problematic. But in many cases we are seeing customers adopt full automation for things like revoking permissions, moving data around, and enforcing archiving or retention policies.
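One way to picture that graduated automation is the sketch below: low-risk remediations execute automatically while destructive ones queue for human approval. The action names and risk tiers are hypothetical, not Teleskope’s remediation catalog.

```python
# Hypothetical graduated automation: safe actions run automatically,
# destructive ones wait for a human decision.
AUTO_SAFE = {"revoke_permission", "archive_file", "apply_retention_policy", "move_file"}
REQUIRES_APPROVAL = {"delete_from_production_db", "purge_backup"}

approval_queue: list[dict] = []

def remediate(action: str, target: str) -> str:
    if action in AUTO_SAFE:
        # In a real system this would call the connector for the data store.
        return f"executed {action} on {target}"
    if action in REQUIRES_APPROVAL:
        approval_queue.append({"action": action, "target": target})
        return f"queued {action} on {target} for human approval"
    return f"unknown action {action}, logged for review"

print(remediate("revoke_permission", "s3://finance-exports"))
print(remediate("delete_from_production_db", "users.ssn"))
```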
Many DSPM/DLP tools struggle with real-time protection. What had to change architecturally to make “real-time” table stakes, and what latencies and throughput can enterprises expect in production?
In order to tackle the real-time problem, it was important to break down the task into its core components. Different situations require different latencies, but the goal is always to provide the most accurate insight in the quickest possible way. That meant building a flexible architecture that lets us parallelize our low-latency system to accommodate different throughput requirements. When an enterprise has Teleskope running in their environment, data is classified and protected right in their infrastructure, reducing latency and outbound data flow. This is what allows us to provide remediation in high-risk scenarios in sub-second time frames.
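In the abstract, parallelizing the scan to meet a throughput target might look like the sketch below; the worker count and the stubbed `classify` function are assumptions for illustration, not a description of Teleskope’s architecture.

```python
# Fan classification work out across parallel workers so throughput scales
# with the deployment; tune the worker count per environment.
from concurrent.futures import ThreadPoolExecutor
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def classify(record: str) -> bool:
    # Stand-in for the in-environment classification step; a real system
    # would run the full tiered pipeline here.
    return bool(SSN.search(record))

def scan_partition(records: list[str], workers: int = 8) -> int:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(classify, records))

print(scan_partition(["id: 123-45-6789", "name: alice"] * 1000))
```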
Privacy and compliance: you claim continuous monitoring and automatic mapping to frameworks/regulations. How do you keep mappings current as laws evolve, and how customizable are controls for different regions or business units?
Candidly, our focus has really shifted away from checking boxes and towards deeply understanding what our customers care about. In some cases, they want to map 100% to the newest regulations coming out, and we are constantly monitoring these changes and incorporating them into our product. But, truthfully, most companies are so far away from being able to comply with these laws fully that we have to meet them where they are and make sure we can get them from point A to B to C before we worry about getting to Z. The way we do this is by first understanding what compliance means to that company (again, a manufacturing company may not see something like GDPR as a major concern), and ensuring that we can shape the product around their specific risk profiles and needs.
GenAI adoption: how are customers using Teleskope to create “safe inputs” and “safe outputs” without degrading developer velocity? Any patterns you recommend?
Customers integrate Teleskope’s Redact API into their training and inference pipelines to ensure sensitive data never flows to generative AI models. The redaction process is abstracted away from developers, preserving development velocity by performing redaction before inference and rehydrating the data afterward.
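A minimal sketch of that redact-then-rehydrate pattern is shown below: sensitive values are swapped for placeholder tokens before the prompt reaches the model, and the originals are restored in the output. The token scheme, the email-only detector, and the `call_model` stub are assumptions for the example, not the Redact API itself.

```python
# Redact before inference, rehydrate after: the model never sees raw PII,
# and developers get back a response with the original values restored.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> tuple[str, dict]:
    mapping = {}
    def _sub(match: re.Match) -> str:
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return EMAIL.sub(_sub, text), mapping

def rehydrate(text: str, mapping: dict) -> str:
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

def call_model(prompt: str) -> str:
    return f"Echo: {prompt}"  # stand-in for a real inference call

def safe_inference(prompt: str) -> str:
    redacted, mapping = redact(prompt)      # sensitive values never reach the model
    return rehydrate(call_model(redacted), mapping)

print(safe_inference("Draft a reply to jane@example.com about her invoice"))
```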
Looking ahead, you’ve talked about an end-to-end “agentic” data security platform with autonomous remediation. What milestones will signal the industry is ready for fully autonomous data protection?
We know for a fact that the industry is ready for this. Other areas of cybersecurity, like the SOC, have already shown a complete shift towards agentic AI as a means of scaling the capacity of security teams. We have a queue of customers asking to be design partners for this work, so we know that a lot of companies feel the same pain of having to manually triage, investigate, come to a decision, and then execute, just to resolve a single ticket. We have absolute conviction that this is where the market is going, and we’re determined to lead that shift.
Thank you for the great interview. Readers who wish to learn more should visit Teleskope.












