CVMI: the Swiss Army knife CLI for ContextVM

ContextVM is built on a simple idea: remote computation should be easy to publish and easy to use without depending on the usual stack of domains, hosted infrastructure, and OAuth-heavy integration.

CVMI is becoming the command line for that world.

With the new discover and call commands, CVMI is no longer only a supporting tool around the ecosystem. It is becoming a direct way to explore the network, inspect remote servers, debug them from the terminal, and integrate ContextVM services into scripts, apps, and agent workflows.

Why this matters now

The CLI is having a comeback, and for good reason.

A command line is one of the few interfaces that works equally well for users and agents. It is explicit, scriptable, composable, and easy to debug. That makes it a natural fit for modern tooling, especially as more people build workflows around agents and automation.

This is also why interest in CLI-first patterns around MCP is growing. Instead of loading every possible tool definition up front, a CLI lets you discover what is available and inspect details only when needed. That can make workflows lighter, cheaper, and easier to compose. Recent examples of this trend include blog posts like CLI vs MCP: https://kanyilmaz.me/2026/02/23/cli-vs-mcp.html, projects like MCPorter: https://github.com/steipete/mcporter/blob/main/README.md, and even Google with gws: https://github.com/googleworkspace/cli.

ContextVM fits this moment unusually well because it already removes many of the assumptions that make remote tooling harder than it needs to be. It does not depend on OAuth or involve any third parties beyond public Nostr relays. It is secure by default, permissionless, and based on key-driven identity and relay-based reachability.

Discover services directly from the network

cvmi discover makes public ContextVM services easier to find.

Instead of relying on docs, directories, or manual endpoint sharing, you can query relay announcements directly from the terminal:

npx cvmi discover

That is useful for anyone. Users and agents can browse what is available. Developers can verify that their servers are visible. Builders can inspect the public surface of the network without extra tooling.

Discovery becomes part of the protocol experience itself, not something bolted on from outside.
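Because discover writes plain text to stdout, its output slots into ordinary shell pipelines. The sketch below mocks the announcement listing with hypothetical sample lines (the real output format is not specified here and may differ between versions), then filters it the same way you would filter the real command:

```shell
# Hypothetical sample of what a discover listing might contain; in a real
# session you would pipe `npx cvmi discover` instead of this stub.
sample_discover() {
  cat <<'EOF'
npub1abc... weather-server  Weather lookups over Nostr
npub1def... notes-server    Simple note storage
EOF
}

# Filter announcements for a keyword, exactly as you would with the real output:
sample_discover | grep -i "weather"
```

In practice that last line becomes `npx cvmi discover | grep -i weather`, with the filter adjusted to whatever the actual announcement format contains.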

Call remote tools without extra ceremony

cvmi call handles the next step: actually using a remote service.

You can inspect a server:

npx cvmi call <server>

inspect a specific tool:

npx cvmi call <server> <tool> --help

and invoke it directly:

npx cvmi call <server> <tool> key=value

This is powerful because it serves several roles at once.

For developers, it is a practical debugging surface for remote servers. You can list exposed tools, inspect expected inputs, try requests manually, and read results without building a custom client first.

For agents and automations, it is a narrow and scriptable interface that fits naturally into shell workflows.

For users, it is a simple way to interact with a remote service from the terminal.
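Because call is just a process with stdout and an exit code, it composes into scripts the usual way: capture the output, branch on success. The stub below stands in for `npx cvmi call <server> <tool> key=value` so the control flow runs offline; the tool name and its output are invented for illustration:

```shell
# Stub standing in for a real invocation such as:
#   npx cvmi call "$SERVER" echo message="hello"
# (hypothetical tool and output; replace with the real command in practice).
call_tool() {
  printf 'hello\n'
}

# Capture the result and branch on the exit status, as with any CLI:
if RESULT=$(call_tool); then
  echo "tool returned: $RESULT"
else
  echo "call failed; inspect the server with: npx cvmi call <server>" >&2
fi
```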

And this is only part of what CVMI is becoming. It already exposes commands such as serve, use, and add, which means the same CLI can help you expose a server over Nostr, connect to a remote service locally, or install ContextVM skills, alongside discovery and direct tool calls.

Why CVMI fits ContextVM so well

These new commands matter because they make ContextVM operationally simple.

ContextVM already had the ingredients for a great open model of remote computation: no mandatory third parties involved, permissionless participation, relay-based discovery, and the ability to deploy from small setups instead of needing full hosted infrastructure from day one.

CVMI now gives that model a better interface.

You can discover services on the network, inspect them from the shell, call them directly, and use the same workflow for debugging, personal use, scripting, or agent integration. That lowers the barrier for everyone who wants to publish tools, experiment with them, or build on top of them.

In practice, this means you can deploy from the garage and still be reachable worldwide. You can expose tools without building a traditional, permissioned web stack. You can consume those tools in the way that suits you best: from a terminal, from edge scripts, or inside agent workflows.

That is what makes this release important. It is not just two more commands. It is ContextVM becoming easier to use in the real environments where people already work.

Start from here

The easiest way to see where CVMI is headed is to run it.

npx cvmi

If you plan to use it often, install it globally and keep it nearby as part of your everyday toolkit:

npm install -g cvmi
cvmi

From there, you can discover public services with cvmi discover, call remote tools with cvmi call, expose an existing MCP server (stdio or http) over Nostr with serve, connect to a remote server locally with use, or install ContextVM skills with add. In one CLI, CVMI is starting to cover the whole path: learning, debugging, publishing, and integration.
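These commands also chain naturally: discover a server, extract its key, call one of its tools. The sketch below mocks both commands with stubs (the key, tool name, and output formats are invented) so the flow runs offline; in practice you would swap in the real `npx cvmi discover` and `npx cvmi call` invocations:

```shell
# Stubs standing in for the real commands (all formats here are hypothetical):
discover() { printf 'npub1exampleserver weather-server\n'; }  # npx cvmi discover
call() { printf '{"temp":"21C"}\n'; }  # npx cvmi call "$1" get-weather city=Berlin

# Take the first announced server key and call a tool on it:
SERVER=$(discover | awk '{print $1}')
echo "calling $SERVER"
call "$SERVER"
```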

Is this a CLI wrapper over MCP?

Yes. It is a CLI for CVM, which puts MCP over Nostr, so you can effectively call remote MCP servers through Nostr using this CLI.