LLMs and MCP Servers: A New Blueprint for Secure AI in Remote Access

A growing number of organizations are embracing Large Language Models (LLMs). LLMs excel at interpreting natural language, guiding troubleshooting, and automating repetitive, routine tasks that slow down administrators. When an AI assistant can take an instruction such as “connect me to the primary Linux cluster and check failed logins,” and immediately execute fully orchestrated actions, the efficiency and productivity gains are undeniable.

As part of this trend, LLMs are finding their way into some of the most sensitive corners of IT operations, including tools that teams rely on to manage remote connections and privileged access across hybrid, cloud, and on-prem environments. Remote access systems sit at the nexus of trust, identity, and operational control. They manage administrator sessions, broker authentication, and connect sensitive workloads to the people responsible for keeping them running.

Why AI Needs a Mediating Layer in Remote Access

This extension of LLMs into privileged workflows is convenient, but it is also problematic. To run a command or connect to a host, some AI tools simply retrieve credentials and pass them through the LLM for use downstream. This is an expedient shortcut but also a potentially dangerous one. If a model receives passwords or keys, then the entire privilege boundary collapses. The organization loses control over credential governance, auditability becomes unreliable, and the LLM becomes a new, opaque actor with access to the heart of the environment.

In addition, models can be influenced by manipulated inputs, making credential exposure even riskier. LLMs' appetite for contextual data also makes them risky companions for systems that guard keys, tokens, and administrative pathways. Ultimately, LLMs (and the AI tools and agents built on them) can be incredibly helpful, but they should never be allowed to hold or handle secrets. They are simply not mature enough to be entrusted in this way.

In light of these concerns and vulnerabilities, a central question now looms for CIOs, CISOs, and operations leaders: How do we enable and position LLMs to help us, but without letting them get too close to our privileged workflows?

Fortunately, an answer is emerging that turns these architectural weaknesses into strengths: Model Context Protocol (MCP) servers.

MCP Servers: Reshaping How LLMs Interact with Infrastructure

MCP servers act as secure intermediaries – effectively an AI “airlock” – that allow LLMs to request actions, but without ever touching the credentials or privileged pathways that those actions require. As organizations push deeper into AI-assisted operations, MCP-style approaches are emerging as the blueprint for safe, scalable integration.

MCP servers introduce a separation of concerns that many security architects have long argued is essential: the AI assists, but a controlled system executes. Instead of giving the LLM authority to act directly, the model is limited to expressing intent (e.g., “connect here,” “collect logs,” “check this policy”) while the MCP server interprets these requests, applies policy, and routes them through vetted tools. Importantly, this approach aligns with the principles described in the NIST AI Risk Management Framework, which emphasizes tool boundaries, mediated permissions, and human-controlled escalation.
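The separation of concerns described above can be sketched in a few lines. This is a minimal illustration, not a real MCP API: the tool names, roles, and policy table are all hypothetical, and a production server would dispatch to vetted executors rather than return strings.

```python
# Minimal sketch of intent mediation: the model emits a structured intent,
# and the mediating layer validates it against policy before any tool runs.
# Tool names and the policy table are illustrative, not a real MCP API.

ALLOWED_TOOLS = {
    "collect_logs": {"roles": {"admin", "operator"}},
    "check_policy": {"roles": {"admin", "operator", "auditor"}},
    "open_session": {"roles": {"admin"}},
}

def mediate(intent: dict, caller_role: str) -> str:
    """Validate an LLM-expressed intent; route only vetted tools."""
    tool = intent.get("tool")
    policy = ALLOWED_TOOLS.get(tool)
    if policy is None:
        return f"denied: unknown tool '{tool}'"
    if caller_role not in policy["roles"]:
        return f"denied: role '{caller_role}' may not call '{tool}'"
    # A real deployment would hand off to a hardened executor here;
    # this sketch only acknowledges the routed action.
    return f"executed: {tool}"

print(mediate({"tool": "collect_logs"}, "operator"))  # executed: collect_logs
print(mediate({"tool": "open_session"}, "auditor"))   # denied: role 'auditor' may not call 'open_session'
```

The key property is that the model's output is treated as untrusted data to be checked, never as a command to be obeyed.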

What makes this design especially impactful is that the LLM never receives privileged material. Authentication is handled internally through secure credential injection. As a result, the LLM only sees outcomes, never the secrets themselves. The LLM can describe what happened, help triage issues, and guide a human through next steps, but it cannot authenticate on its own.
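Credential injection can be illustrated with a toy executor. Everything here is a placeholder (the in-memory "vault", the host name, the command): the point is the shape of the boundary, in which secrets are resolved inside the executor and only sanitized outcomes flow back toward the model.

```python
# Sketch of secure credential injection: secrets are resolved out-of-band,
# inside the execution boundary, and the LLM-facing result carries only
# the outcome. The vault, host, and command are placeholders.

_VAULT = {"linux-cluster-01": "s3cr3t-ssh-key"}  # stands in for a real secret store

def run_privileged(host: str, command: str) -> dict:
    secret = _VAULT[host]              # injected inside the boundary only
    # ... authenticate with `secret` and run `command` on `host` ...
    outcome = {
        "host": host,
        "command": command,
        "status": "ok",
        "summary": "command completed inside the session",
    }
    assert secret not in str(outcome)  # the model never sees the secret
    return outcome

print(run_privileged("linux-cluster-01", "lastb | head")["status"])  # ok
```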

Security research increasingly emphasizes that the transport layer between AI models and local tools is a critical part of the attack surface. For example, OWASP's Top 10 for LLM Applications highlights how insecure plugin interactions – especially those exposed through open localhost HTTP endpoints – can allow untrusted local processes to trigger privileged actions. MCP-style architecture avoids this by relying on OS-enforced, user-scoped channels such as named pipes, which provide stronger isolation. This approach aligns with ENISA's broader warnings about insecure AI attachment points and the risks they introduce in high-privilege environments.
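On POSIX systems, the analog of a user-scoped named pipe is a Unix domain socket placed in an owner-only directory. The sketch below shows the idea, assuming a Linux-style filesystem permission model; any local process can connect to an open localhost TCP port, but only the owning user can even traverse a 0700 directory to reach this socket.

```python
# Sketch of a user-scoped transport for model<->tool traffic: a Unix
# domain socket inside an owner-only directory, instead of an open
# localhost HTTP port reachable by any local process. POSIX-only.
import os
import socket
import stat
import tempfile

sock_dir = tempfile.mkdtemp()          # private temp dir (mode 0700)
os.chmod(sock_dir, 0o700)              # enforce owner-only traversal explicitly
sock_path = os.path.join(sock_dir, "mcp.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)                 # creates the socket file
os.chmod(sock_path, 0o600)             # owner-only read/write on the socket
server.listen(1)

mode = stat.S_IMODE(os.stat(sock_path).st_mode)
print(oct(mode))  # 0o600
server.close()
```

On Windows, the equivalent isolation comes from named pipes with a restrictive security descriptor; the principle is the same in both cases: let the operating system, not an HTTP listener, decide who may talk to the tool server.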

Another key advantage of MCP servers is the ability to execute actions inside remote sessions. By using secure virtual channels or equivalent mechanisms, MCP servers can perform operations directly within RDP or SSH environments, yet without relying on brittle, MFA-bypassing scripts. This approach combines convenience with governance: administrators get powerful automation, but without sacrificing Zero Trust principles.

Together, these characteristics redefine what “safe AI integration” looks like. Instead of wrapping AI around sensitive systems, organizations place a hardened layer in between, defining what AI is allowed to ask for and receive – and just as importantly, what it is never allowed to see.

Operational Benefits of LLM + MCP Architectures

The operational payoff of this design is significant. By mediating AI through MCP, IT teams can orchestrate environment setup, configuration standardization, and multi-session tasks using simple natural language. This has the potential to significantly cut the time between problem identification and resolution, especially in hybrid environments where context switching typically slows everything down.

These improvements also align with broader industry forecasts and recommendations. Gartner points to LLM-assisted IT operations as a major accelerator for hybrid infrastructure management, helping teams work faster without sacrificing governance. The model analyzes logs, summarizes complex datasets, and guides humans through troubleshooting steps – all while the MCP layer ensures every action is compliant and traceable.

The result is not just greater speed but stronger governance. When an LLM consistently routes tasks through the same hardened pathways, organizations discover reliable audit trails, reproducible workflows, and clear attribution between human and AI activity. Logs include prompts, tool calls, session details, and policy references – all of which give compliance teams the transparency they increasingly need and expect in AI-driven environments.
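To make the audit-trail point concrete, here is a sketch of the kind of record a mediation layer might emit per action. The field names and values are illustrative, not a standard schema; the essential property is that prompt, tool call, session, policy decision, and human/AI attribution travel together in one entry.

```python
# Sketch of a per-action audit record tying the prompt, tool call,
# session, and authorizing policy together. Illustrative schema only.
import datetime
import json

def audit_record(prompt, tool, args, session_id, policy_id, actor):
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,                          # human + AI attribution
        "prompt": prompt,                        # what the model was asked
        "tool_call": {"name": tool, "args": args},
        "session": session_id,
        "policy": policy_id,                     # rule that authorized the action
    }

rec = audit_record(
    prompt="check failed logins",
    tool="collect_logs",
    args={"host": "linux-cluster-01"},
    session_id="sess-42",
    policy_id="pol-ssh-read",
    actor={"human": "alice", "agent": "llm-assistant"},
)
print(json.dumps(rec, indent=2))
```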

There are also cultural benefits to this approach. By offloading toil (log review, repetitive checks, mundane administrative steps), IT teams can shift their energy and focus toward higher-value work. This often improves both efficiency and morale, especially in operations groups that are stretched thin by hybrid infrastructure sprawl.

Lastly, since MCP architectures can support multiple LLMs, organizations are not locked into a single provider. They can choose commercial, open-source, or on-prem models, depending on regulatory needs and data governance preferences.

Security Risks That Still Need Attention

While the benefits we have explored are substantial – and in some respects transformative – it is important to note that even with a secure mediation layer, LLM-assisted environments are not risk-free. Four lingering concerns stand out:

  • As noted earlier, prompt injection – both direct and indirect – remains one of the biggest concerns, and continues to be one of the most extensively documented attack classes against LLMs.
  • Metadata exposure is another concern. Although MCP servers shield credentials, unless teams enforce strong data-minimization practices, prompts and responses can still leak hostnames, internal paths, and topology patterns.
  • MCP-based systems add new machine identities: tool servers, virtual channels, agent processes. According to industry research, machine identities vastly outnumber human identities in many organizations, and the mismanagement of these identities is a growing source of breaches.
  • Finally, the AI supply chain cannot be ignored. Model updates, tool extensions, and integration layers require ongoing validation. Analysis by ENISA stresses that AI systems introduce a broader and more fragile supply chain than traditional software stacks.
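The data-minimization concern above lends itself to a simple mechanical control: scrub identifying details from text before it reaches the model. The sketch below redacts IPv4 addresses and a few internal-looking hostname suffixes; the patterns are illustrative and would need tuning to a real environment's naming conventions.

```python
# Sketch of a data-minimization filter: scrub IPs and internal hostnames
# from text before it is sent to the model. Patterns are illustrative.
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
HOST_RE = re.compile(r"\b[a-z][a-z0-9-]*\.(?:corp|internal|local)\b")

def minimize(text: str) -> str:
    """Replace topology-revealing tokens with neutral placeholders."""
    text = IP_RE.sub("[ip]", text)
    return HOST_RE.sub("[host]", text)

print(minimize("ssh failed from 10.0.0.5 to db01.internal"))
# ssh failed from [ip] to [host]
```

Filters like this do not eliminate metadata leakage, but they shrink what a prompt or response can reveal about internal topology.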

The Next 12 Months: A Practical Path Forward

Organizations exploring LLM-driven automation in privileged environments should view MCP-style mediation as the expected baseline. Over the next year, leaders can take several practical steps:

  • Establish an internal governance model defining which LLMs are approved and what data they may access.
  • Ensure that all AI-driven privileged actions route through an MCP-like layer rather than interacting with credentials directly.
  • Integrate AI-initiated workflows into existing PAM frameworks.
  • Adopt policy-as-code to define and test tool boundaries.
  • Prioritize data minimization.
  • Incorporate AI-specific red teaming focused on prompt manipulation, model behavior, and local interface hardening.
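The policy-as-code step above is worth a concrete sketch: the tool boundary becomes data that can be versioned, reviewed, and unit-tested like any other code. Tool names, roles, and the approval flag here are hypothetical.

```python
# Sketch of policy-as-code for tool boundaries: the policy is plain data
# that can be versioned and unit-tested. Names are illustrative.

POLICY = {
    "collect_logs": {"allow_roles": ["operator", "admin"], "needs_approval": False},
    "rotate_key":   {"allow_roles": ["admin"],             "needs_approval": True},
}

def evaluate(tool: str, role: str, approved: bool = False) -> bool:
    """Return True only if the role may run the tool under current policy."""
    rule = POLICY.get(tool)
    if rule is None or role not in rule["allow_roles"]:
        return False
    return approved or not rule["needs_approval"]

# Policy tests live next to the policy, so boundary changes break CI loudly.
assert evaluate("collect_logs", "operator")
assert not evaluate("rotate_key", "operator")
assert not evaluate("rotate_key", "admin")            # human approval required
assert evaluate("rotate_key", "admin", approved=True)
```

Because the policy is data, a change to what the AI may request shows up as a reviewable diff rather than a silent configuration drift.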

The Final Word

LLMs are reshaping remote access and privileged operations, offering new levels of speed, guidance, and automation. However, safely unleashing this potential requires a disciplined architectural approach: one that places a secure, auditable mediation layer between AI models and sensitive systems. MCP servers provide this structure. They allow AI to help without “handing it the keys,” merging innovation with governance in a way that aligns with modern Zero Trust expectations.

For organizations looking to responsibly and profitably harness AI, MCP-style designs represent a practical, forward-looking blueprint – one where LLMs amplify human expertise rather than inadvertently compromising the security of privileged access and workflows.

As president and CEO of Devolutions, David leads the company’s corporate strategy and oversees product development with a focus on innovation, security, and usability. After founding Devolutions in 2004 as a software consulting firm, David shifted the company’s focus in 2010 to develop powerful, user-friendly IT solutions. Today, Devolutions supports over 1 million users in more than 140 countries and is recognized as a trusted leader in privileged access management and IT security for SMBs. Reflecting on the company’s journey, David credits its success to his deep expertise in software architecture, his entrepreneurial drive, and a relentless commitment to customer satisfaction.