LIVE THREATS
MEDIUM Cross-Machine AI Agent Relay Tool Expands Attack Surface for Developer Environments // HIGH Desktop Automation CLI Grants AI Agents Deep OS-Level Control // HIGH Frontier LLMs Now Autonomously Breach Corporate Networks in AISI Cyber Tests // HIGH Premature AI Agent Deployments Expose Production Systems to Destructive Actions // HIGH Anthropic Launches Claude Security to Close AI-Accelerated Exploit Window // CRITICAL CVSS 10 Gemini CLI Flaw Turns CI/CD Pipelines Into RCE Attack Vectors // MEDIUM OpenAI Launches Phishing-Resistant Security Mode for High-Risk ChatGPT Accounts // HIGH UK AI Security Institute Finds GPT-5.5 Matches Claude Mythos in Cyber Capabilities // MEDIUM AI-Powered Honeypots Expose Blind Spots in Automated Malicious AI Agents // HIGH DPRK Actors Use Claude LLM to Inject Malware Into npm Supply Chain //
ATLAS OWASP MEDIUM Moderate risk · Monitor closely RELEVANCE ▲ 6.5

Cross-Machine AI Agent Relay Tool Expands Attack Surface for Developer Environments

TL;DR MEDIUM
  • What happened: Loopsy relays AI agent commands and shell access across machines via a self-hosted Cloudflare Worker.
  • Who's at risk: Developers using AI coding agents (Claude Code, Cursor, Codex) who deploy Loopsy are exposed to relay hijacking, prompt injection via mobile input, and lateral movement if the relay is compromised.
  • Act now: Audit network exposure of any Cloudflare Worker relay before deploying Loopsy in production or sensitive environments · Restrict shell command scope accessible via the relay using allowlists and sandboxing · Treat mobile-originated inputs to AI agents as untrusted and apply prompt injection defences before execution
Cross-Machine AI Agent Relay Tool Expands Attack Surface for Developer Environments

Overview

Loopsy is an open-source developer tool (GitHub: leox255/loopsy) that enables cross-machine communication between AI coding agents — including Claude Code, Cursor, and OpenAI Codex — and mobile devices. The system uses a self-hosted relay on Cloudflare Workers to broker terminal commands and AI agent interactions from a smartphone to a developer’s laptop. While framed as a productivity enhancement, the architecture represents a meaningful expansion of the attack surface surrounding agentic AI workflows.

As AI coding agents gain autonomous shell access and the ability to execute code, tools that extend their reachability across network boundaries deserve close security scrutiny.

Technical Analysis

Loopsy’s architecture consists of three components:

  1. A laptop-side daemon — installed globally via npm install -g loopsy, it exposes terminal and AI agent interfaces to the relay.
  2. A Cloudflare Workers relay — self-hosted by the user, acting as the broker between mobile and laptop. Commands and responses are tunnelled through this relay.
  3. A mobile app — sends instructions to the relay, which forwards them to the AI agent or shell on the target machine.

From a security perspective, several risks emerge:

  • Relay as a single point of compromise: If the Cloudflare Worker is misconfigured, lacks authentication, or is targeted via a supply chain attack on the npm package, an attacker gains a pathway to issue arbitrary shell commands or inject instructions into AI agent sessions.
  • Prompt injection via mobile input: Any text entered via the mobile app and forwarded to an AI agent (e.g., Claude Code) could carry injected instructions if the input originates from an untrusted or attacker-controlled source.
  • Excessive agency: AI agents operating in agentic mode with shell access, now controllable from a mobile device over a network relay, represent a textbook case of excessive agency — a broad action envelope with limited contextual guardrails.
  • npm supply chain risk: The global npm install and a separate deploy package (@loopsy/deploy-relay) introduce supply chain dependency risks. Malicious package versions could backdoor both the relay and the local daemon.

Framework Mapping

  • AML.T0051 (LLM Prompt Injection): Mobile-sourced inputs forwarded to AI agents without sanitisation are a vector for prompt injection.
  • AML.T0010 (ML Supply Chain Compromise): The npm-distributed daemon and relay packages are potential supply chain targets.
  • LLM08 (Excessive Agency): The tool explicitly extends AI agent action scope across machine boundaries via a network relay.
  • LLM07 (Insecure Plugin Design): The relay-to-agent integration lacks documented input validation or sandboxing controls.

Impact Assessment

Developers running AI coding agents with shell access in corporate or sensitive environments are most at risk. A compromised relay could enable lateral movement, data exfiltration from the development environment, or injection of malicious code into AI-assisted workflows. The impact is elevated by the tool’s design goal: seamless, low-friction remote control.

Mitigation & Recommendations

  • Enforce relay authentication: Ensure the Cloudflare Worker requires strong authentication tokens; do not expose it publicly without access controls.
  • Scope-limit shell access: Use allowlists to restrict which commands AI agents can execute when invoked via the relay.
  • Sanitise all mobile inputs: Treat inputs from the mobile app as untrusted; apply prompt injection defences before passing to any AI agent.
  • Pin npm dependencies: Lock and audit the loopsy and @loopsy/deploy-relay packages to prevent supply chain substitution.
  • Network segmentation: Avoid deploying Loopsy on machines with access to production systems or sensitive credentials.

References

◉ AI THREAT BRIEFING

Stay ahead of the threat.

Twice-weekly digest of critical AI security developments — every story mapped to MITRE ATLAS and OWASP LLM Top 10. Free.

No spam. Unsubscribe anytime.