Sterling Wilson, Field CTO at Object First – Interview Series

Sterling Wilson, Field Chief Technology Officer (CTO) at Object First, is a technology strategist with a unique perspective gained as an infrastructure engineer in the public and private sectors. Leveraging experience in technical and leadership roles at data security, data management, and storage companies, he brings an end-to-end perspective to protecting and maximizing the value of data. Engaging with the greater IT community, Sterling works to elevate data resilience adoption through secure-by-design architecture, thought leadership, and practical education across the industry.

Object First is a company that develops backup storage appliances purpose-built for Veeam, with its flagship product called Ootbi (“out-of-the-box immutability”). These appliances are designed to be ransomware-proof, featuring built-in immutable storage that prevents backups from being altered or deleted. They are also engineered for simplicity and speed, with deployment possible in as little as 15 minutes, while delivering high performance for backup and recovery operations.

How did you first get involved in managing infrastructure or backup systems for critical data environments, and how has your perspective evolved with the rise of AI?

I first became involved with managing infrastructure and backup systems as a Microsoft Infrastructure consultant in Washington, D.C., where I built and maintained everything from domain controllers and Exchange servers to file servers and the data stored on them. At that stage, things were relatively straightforward with tape backups and offsite copies being the standard practice.

My perspective shifted during my years as a Virtualization Architect at the Social Security Administration, where we saw the first major evolution in backup. As environments became increasingly virtualized, data itself became central to architecture; essentially, the environment was the data and vice versa. This led to an explosion in datasets and changed how we accessed the source data for backup. Cloud adoption extended these practices, but it wasn’t until the rise of today’s threat actors and the rapid growth of AI that a new set of challenges truly emerged.

Threat actors now target backups directly to eliminate recovery options. At the same time, AI is being used both to exploit data for financial gain and to strengthen data resilience in the face of attacks. AI has also introduced new priorities around protecting the entire data pipeline, from raw sources and feature stores to training infrastructure, model artifacts, and registries, causing our industry to rethink traditional data silos to ensure resilience at every step of the AI lifecycle.

In your view, what defines a modern, fully developed disaster recovery plan—not just in theory, but in practical, day-to-day operations?

A modern, fully developed disaster recovery plan should incorporate six critical components. They include involving the right people, matching risks and measures, prioritizing assets, defining timelines, configuring backups, and enforcing testing and optimization. These six factors ensure the disaster recovery plan works as intended in the event of a crisis.

The first component of the plan is making sure that the right people have the right roles. Each member should have clear responsibilities in the recovery process. There should also be a clear line of communication between all team members, including vendors and customers. The next component is a full list of risks and consequences. This asset should outline incidents an organization might suffer and prepare a step-by-step recovery protocol for each, defining roles, actions, and tools.

The third component is prioritizing which assets are crucial for business continuity and ranking them in order of importance. Each asset should have a clear recovery protocol, and each team member should understand their part in the recovery process. When thinking about business continuity strategies, it’s important to consider how much downtime and data loss a company can reasonably handle; always referring to the recovery point objective and recovery time objective will help leaders stay consistent with their goals.

The last two components pivotal for an effective disaster recovery plan are configuring backups and enforcing testing and optimization. Once the team understands the timeline and is on track for recovery, it is time to configure backups. This entails choosing backup modes, locations, and frequency, defining recovery speed, and appointing the people responsible. Lastly, performing regular tests will ensure that when disaster strikes, companies are prepared.
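To make these six components concrete, here is a minimal sketch of how a team might capture such a plan as structured data; all roles, contacts, runbook paths, and values below are hypothetical illustrations, not part of any Object First or Veeam product.

```python
# Illustrative sketch: a disaster recovery plan captured as structured data.
# Every name, role, and number here is a hypothetical placeholder.

dr_plan = {
    "people": [  # clear roles and lines of communication
        {"role": "incident commander", "contact": "oncall-ic@example.com"},
        {"role": "backup administrator", "contact": "backup-team@example.com"},
        {"role": "vendor liaison", "contact": "vendors@example.com"},
    ],
    "risks": [  # each risk maps to a step-by-step recovery protocol
        {"incident": "ransomware", "runbook": "runbooks/ransomware.md"},
        {"incident": "flood", "runbook": "runbooks/site-loss.md"},
        {"incident": "hardware failure", "runbook": "runbooks/hw-failure.md"},
    ],
    "assets": [  # prioritized for business continuity, with RPO/RTO per asset
        {"name": "customer-db", "priority": 1, "rpo_hours": 1, "rto_hours": 4},
        {"name": "ai-feature-store", "priority": 2, "rpo_hours": 4, "rto_hours": 8},
        {"name": "file-shares", "priority": 3, "rpo_hours": 24, "rto_hours": 24},
    ],
    "backups": {  # modes, locations, frequency, and owners
        "mode": "incremental-forever",
        "targets": ["on-prem immutable appliance", "cloud object storage"],
        "frequency_hours": 4,
        "owner": "backup administrator",
    },
    "testing": {  # regular tests keep the plan honest
        "restore_test_interval_days": 30,
        "full_failover_exercise_interval_days": 180,
    },
}

print(f"{len(dr_plan['assets'])} assets prioritized, "
      f"restore test every {dr_plan['testing']['restore_test_interval_days']} days")
```

Writing the plan down in a machine-readable form makes it easier to version, review, and test against during regular exercises.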

With the increase in AI-generated data, are you seeing a shift in how businesses prioritize what gets backed up—and how often?

As more companies adopt AI and manage the vast amounts of AI-generated data that business continuity depends on, these “crown jewels” need additional protections and will begin to be prioritized over other data. Traditionally, newer workloads would be an afterthought for backups, but AI brings new data to the forefront of the backup conversation. Surprisingly, there is an AI data backup gap: many organizations are not doing enough to protect their AI-generated data, with 65% of organizations regularly backing up only around 50% of their total volume of AI-generated data.

In the future, we are likely to see a shift toward more AI data being backed up. As more businesses realize the importance of AI in their operations, they will begin to treat the large amounts of data they generate as intellectual property and understand the major value it holds. Security teams will also realize how big a risk it is to leave this type of data unprotected. The loss of this data is not only devastating from a security standpoint but also relinquishes competitive advantage and threatens business continuity.

What are the most common gaps you see in organizations’ current backup strategies, especially when it comes to handling unexpected disruptions like floods or hardware failures?

One of the most effective strategies for achieving data resiliency during a disruption is eliminating single points of failure and ensuring backups remain immutable and recoverable, no matter the disaster. Against a threat actor or attack, immutable backups are essential to a comprehensive protection strategy, using recommended industry solutions that are simple and easy to deploy. In its simplest definition, immutability ensures that data cannot be altered or deleted once recorded, providing a clean, secure copy of critical data. Even if production systems, backup systems, or access controls are compromised, the data remains safe. This can only be achieved using a backup storage system that is secure-by-design with Zero Access to destructive actions, and this Zero Access must be verifiable with third-party testing. To ensure this immutable data remains recoverable in all scenarios, including floods and hardware failures, a 3-2-1 backup strategy is fundamental, ensuring that you have copies of your data on different secure systems, with on-prem immutable storage being the fastest means to recovery.
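As a rough illustration of the 3-2-1 idea combined with immutability, the sketch below checks a set of backup copies for at least three copies, two media types, one offsite copy, and one immutable copy; the field names and copy list are assumptions made for this example, not a vendor API.

```python
# Minimal 3-2-1 sketch: at least 3 copies, on 2 different media, with 1 copy
# offsite, plus at least one immutable copy. All fields are illustrative.

def satisfies_3_2_1(copies):
    media = {c["media"] for c in copies}
    offsite = [c for c in copies if c["offsite"]]
    immutable = [c for c in copies if c["immutable"]]
    return (
        len(copies) >= 3        # three copies of the data
        and len(media) >= 2     # on two different media types
        and len(offsite) >= 1   # at least one copy offsite
        and len(immutable) >= 1 # at least one copy that cannot be altered
    )

copies = [
    {"location": "production", "media": "primary-san", "offsite": False, "immutable": False},
    {"location": "on-prem backup appliance", "media": "object-storage", "offsite": False, "immutable": True},
    {"location": "cloud bucket", "media": "cloud-object-storage", "offsite": True, "immutable": True},
]

print(satisfies_3_2_1(copies))  # True
```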

For companies investing heavily in AI infrastructure, what considerations should they make when deciding between on-premises, hybrid, and cloud-first backup solutions?

As AI increasingly becomes the backbone of many companies’ infrastructure and operations, one of the most important considerations is choosing the correct storage system to safely and securely store the data that AI tools and applications are generating. It can be tricky to decide between on-premises and cloud-first backup solutions, but a hybrid approach is the best way to support AI infrastructure. A hybrid model is agile and adaptable, promoting AI success and helping drive meaningful business outcomes.

A hybrid solution offers both the scalability of the cloud and the control that on-premises provides. Hybrid models also allow workloads to be adjusted as the needs of the organization evolve, which is especially important for AI infrastructure, as AI workloads will likely demand ever more support. Hybrid storage also combines local and cloud backup options to provide an extra layer of protection; it typically includes a physical backup device on-premises that backs up data to the cloud. Compliance requirements can also be met with a hybrid approach: sensitive data can be kept on-site for the fastest access and recovery while still being offloaded to the cloud for additional protection and redundancy.
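For illustration only, here is a minimal sketch of the tiering pattern described above: local backup files older than a cutoff are copied to a cloud bucket as the offsite layer. The bucket name, local path, and file extension are placeholders, and the example assumes standard AWS-style credentials are already configured for boto3.

```python
# Hypothetical sketch: tier local backup files older than a cutoff to a cloud
# bucket for an extra, offsite layer of protection. Paths and names are
# placeholders, not a product configuration.

import pathlib
import time

import boto3

LOCAL_BACKUP_DIR = pathlib.Path("/backups")   # on-prem backup repository (placeholder)
BUCKET = "example-offsite-backups"            # hypothetical bucket
TIER_AFTER_SECONDS = 7 * 24 * 3600            # copy to cloud after 7 days

s3 = boto3.client("s3")

for path in LOCAL_BACKUP_DIR.glob("*.vbk"):   # backup file extension is illustrative
    age = time.time() - path.stat().st_mtime
    if age > TIER_AFTER_SECONDS:
        # Upload keeps the local copy in place: the cloud is an additional
        # tier, not a replacement for fast on-prem recovery.
        s3.upload_file(str(path), BUCKET, path.name)
        print(f"tiered {path.name} to s3://{BUCKET}/")
```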

How important is geographic redundancy in backup and disaster recovery planning today, especially with the growing risk of extreme weather events?

Geographic redundancy refers to the replication of data across two separate geographic locations. Data is stored in a primary location and then replicated to a secondary region in case of a catastrophic failure in the primary region, such as a natural disaster or data breach. A failover to the secondary location is then triggered to ensure business continuity, and users access data from the secondary location seamlessly, with little downtime.

Geographic redundancy allows data and critical applications to remain active and recoverable even when disaster strikes. Cloud storage also accounts for redundancy requirements and can save data in more than one location; however, cloud storage does not offer the fast recovery many organizations call for today. Leveraging on-premises immutable storage as the primary backup location while tiering to the cloud for the secondary copy provides an extra layer of redundancy that gives IT teams peace of mind, knowing that data is securely stored and the organization can rapidly switch to a secondary region if necessary. For organizations that cannot sustain long periods of downtime and rely heavily on data availability, geographic redundancy is essential.
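As a simple illustration of the failover described above, the sketch below probes a primary region and falls back to a secondary replica when the primary is unreachable; both endpoints are hypothetical placeholders.

```python
# Illustrative failover sketch: prefer the primary region, fall back to the
# secondary replica if the primary health check fails. Endpoints are placeholders.

import urllib.error
import urllib.request

PRIMARY = "https://backups.us-east.example.com/health"
SECONDARY = "https://backups.eu-west.example.com/health"

def reachable(url, timeout=3):
    """Return True if the endpoint answers its health check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

active_region = PRIMARY if reachable(PRIMARY) else SECONDARY
print(f"serving restores from: {active_region}")
```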

Could you walk us through how immutable backups and object storage are being used to protect against ransomware or accidental data corruption?

Immutable backups are key to both ransomware recovery and general disaster recovery. They present the most formidable line of defense, providing a reliable fallback in the event of a data breach, and are the ultimate choice for ensuring recovery after an attack. Their immutable nature means data cannot be encrypted or altered by ransomware, preserving its original state. Even if the network is compromised, immutable backups remain unaltered, providing reliable data recovery.

Immutable backups ensure that backup data remains unchanged from the moment it is written, preventing unauthorized alterations, maintaining data integrity, and defending against ransomware and encryption attacks. The proven and most secure way to achieve this is through S3 versioning combined with Object Lock, which enforces immutability at the time an object is created in the storage system. This eliminates the risk of tampering, malware injection, or deletion, even in the event of insider threats or credential compromise. In contrast, legacy storage solutions, designed in the pre-ransomware era, lack native immutability in their core architecture.
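As a concrete illustration of versioning plus Object Lock, the sketch below uses boto3 to create a bucket with Object Lock enabled (which implies versioning) and writes an object under a compliance-mode retention date. The bucket name, object key, and retention period are hypothetical, and many S3-compatible object stores expose the same calls.

```python
# Sketch of S3-style immutability with Object Lock (boto3). Names and
# retention are hypothetical placeholders.

from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
bucket = "example-immutable-backups"

# Object Lock must be enabled when the bucket is created; it implies
# versioning. (A CreateBucketConfiguration may be needed outside us-east-1.)
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Write a backup object that cannot be altered or deleted until the retention
# date passes, even by principals holding delete permissions.
s3.put_object(
    Bucket=bucket,
    Key="veeam/backup-001.vbk",                 # illustrative key
    Body=b"...backup data...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```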

What role should backup systems play in responding to cybersecurity incidents? Is that response different when the affected data supports AI-driven services?

Following a cybersecurity incident such as a ransomware attack, it is important to have a strong and effective ransomware strategy, especially since no one is immune to these attacks. Data backup strategies are imperative, and there are several ways backups ensure critical data remains secure and recoverable. By maintaining up-to-date copies of data, the impact of ransomware can be significantly reduced. Duplicating vital information and storing it offline or offsite ensures restoration, even if on-site backups are compromised. One of the most important steps is securing your backups by isolating them from the network and restricting access to backup systems until the infection is eradicated.

When it comes to AI workloads, especially ones involving model training and inference logs, what kinds of data need to be prioritized in a recovery scenario?

When considering data prioritization for AI workloads, the first step is to consider the quality of the data. Even though LLMs are trained on large datasets, if that data is not clean, high-quality data, then AI outputs will not be effective. Several factors determine the quality of the data: accuracy, consistency, completeness, relevance, and reliability. However, this may still present a challenge, as organizations may be forced to choose which data is prioritized in a disaster recovery scenario.

The second step is identifying which quality data should be prioritized in recovery to meet recovery objectives. These objectives should highlight the goals and metrics that will determine the success of the recovery. The recovery point objective (RPO) is an important metric because it defines the maximum amount of data loss an organization can handle. The recovery time objective (RTO) establishes the maximum amount of time an organization can tolerate for restoration. These objectives also need to align with business priorities. Once the goals are outlined, the quality data needs to be classified based on its dependency; to determine this, think about which data will promote business continuity. After all these steps are taken, there should be a clear set of quality, dependent data, aligned with recovery objectives and business continuity, that should be prioritized in a recovery scenario.
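As a small illustration of checking prioritized AI data against an RPO, the sketch below flags datasets whose most recent backup is older than their agreed recovery point objective; the dataset names, priorities, and timestamps are hypothetical.

```python
# Illustrative RPO check: flag prioritized datasets whose most recent backup
# is older than the agreed recovery point objective. Values are hypothetical.

from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

datasets = [
    # (name, priority, rpo, last successful backup)
    ("training-data",  1, timedelta(hours=4),  now - timedelta(hours=2)),
    ("model-registry", 1, timedelta(hours=4),  now - timedelta(hours=9)),
    ("inference-logs", 2, timedelta(hours=24), now - timedelta(hours=12)),
]

for name, priority, rpo, last_backup in sorted(datasets, key=lambda d: d[1]):
    exposure = now - last_backup          # data that would be lost right now
    status = "OK" if exposure <= rpo else "RPO VIOLATION"
    print(f"P{priority} {name}: exposure {exposure}, RPO {rpo} -> {status}")
```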

How do you help organizations balance the cost of robust backup and recovery systems with the urgency of being prepared for low-probability but high-impact disasters?

A cost-benefit analysis can help organizations weigh the costs of various backup and recovery strategies against potential costs of data loss and downtime from a disaster or crisis. This analysis requires identifying critical systems and data, setting RTOs and RPOs, and then assessing how backup strategies align with these needs, measuring the return on investment (ROI) through a comparison of costs and savings from minimized disruptions. Once the risk factors have been balanced and a backup system has been chosen, it is critical to regularly run this analysis to confirm that the organization’s backup and disaster recovery requirements have not changed. If they have, then reevaluate backup strategies to ensure that priority data is being protected and secured in the wake of a natural disaster or security incident.
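For illustration, a back-of-the-envelope version of that cost-benefit arithmetic might look like the sketch below; every figure is a hypothetical placeholder to be replaced with an organization’s own incident probabilities, downtime costs, and solution pricing.

```python
# Back-of-the-envelope cost-benefit sketch. All figures are hypothetical.

annual_incident_probability = 0.10      # chance of a serious incident per year
downtime_cost_per_hour = 20_000         # revenue and productivity impact

rto_without_backups_hours = 72          # slow rebuild from scratch
rto_with_backups_hours = 4              # fast restore from immutable backups

annual_backup_cost = 60_000             # storage, software, and operations

expected_loss_without = (annual_incident_probability
                         * rto_without_backups_hours * downtime_cost_per_hour)
expected_loss_with = (annual_incident_probability
                      * rto_with_backups_hours * downtime_cost_per_hour)

annual_savings = expected_loss_without - expected_loss_with
roi = (annual_savings - annual_backup_cost) / annual_backup_cost

print(f"expected loss avoided per year: ${annual_savings:,.0f}")
print(f"ROI on the backup investment:  {roi:.0%}")
```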

If you could recommend just one improvement that most companies should make to their disaster recovery strategy right now, what would it be—and why?

One improvement I’d recommend is separating your backup software from your backup storage. Too often, companies run both in the same environment or appliance, which creates a single blast radius: if the software layer is compromised, the storage goes down with it. By isolating backup storage using third-party tested and approved solutions, you dramatically reduce the attack surface and align with Zero Trust principles. This separation ensures that even if an attacker gains control of your production or backup management systems, your actual backup data remains out of reach. It’s a simple architectural change, but it makes the difference between a total loss and a fast, confident recovery.

Thank you for the great interview. Readers who wish to learn more should visit Object First.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.