An AI-native shell
for your desktop.
Wylde is a local-first operating environment for AI on your own hardware. Twenty native services — voice, retrieval, training, orchestration, graph — all running together, all on your machine, no cloud round-trip.
One install. Twenty services. Zero cloud.
Wylde bundles inference, retrieval, training, voice, automation, and graph storage as a single coordinated platform — every piece running natively on your machine.
Wake-word, STT, and TTS run on the Intel NPU at ~5W. 48 built-in intents, custom YAML commands, voice profiles, and a feedback loop that gets better over time.
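Since the document says custom commands are plain YAML, a sketch of what one might look like — the field names (`name`, `phrase`, `action`, `args`) are illustrative assumptions, not Wylde's documented schema:

```yaml
# Hypothetical custom voice command — field names are illustrative,
# not Wylde's actual schema.
- name: open_notes
  phrase: "open my notes"
  action: launch
  args:
    path: "C:\\Users\\me\\notes"
```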
Agent Orchestra runs a 15-stage coding workflow with planner, architect, test writer, debugger, adversarial critic, and an experiential learning store that remembers prior lessons.
Vector + full-text RAG with HyDE, citation, and graph-enhanced retrieval. Backed by an embedded knowledge graph (Neo4j-compatible) so you can traverse relationships, not just search blobs.
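Graph-enhanced retrieval means a query can follow edges instead of only matching text. A hypothetical Cypher traversal over the embedded graph — node labels and relationship names here are assumptions for illustration, not Wylde's actual schema:

```cypher
// Find documents related to a hit through a shared project node.
// Labels (:Document, :Project) and :MENTIONS are illustrative.
MATCH (d:Document)-[:MENTIONS]->(p:Project)<-[:MENTIONS]-(related:Document)
WHERE d.title = "Q3 planning"
RETURN DISTINCT related.title
LIMIT 5
```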
A complete training studio: dataset workshop, captioner (Florence-2 / Qwen2.5-VL), config builder, live SSE loss curves, eval lab, and model gallery. Cooperative VRAM scheduling across services.
Loopback-only by default. No accounts, no telemetry, no download tracking. Optional remote access via Tailscale with per-device approval.
Services talk over Windows named pipes with a msgpack wire format. Hot-path latency is ~0.1 ms — about 15× faster than HTTP loopback. HTTP fallback included for debugging.
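The wire format above is msgpack over a named pipe. A minimal sketch of the general idea — length-prefixed messages over a byte stream — using only the standard library (JSON stands in for msgpack here, and the framing layout is an assumption, not Wylde's actual protocol):

```python
import json
import struct

def frame(payload: dict) -> bytes:
    """Length-prefix a message for a byte-stream transport such as a pipe.
    Wylde's wire format is msgpack; json is a stdlib stand-in here."""
    body = json.dumps(payload).encode()
    return struct.pack("<I", len(body)) + body  # 4-byte little-endian length

def unframe(buf: bytes) -> dict:
    """Read one length-prefixed message back out of a buffer."""
    (n,) = struct.unpack_from("<I", buf)
    return json.loads(buf[4:4 + n].decode())

msg = {"service": "orchestrator", "method": "ask", "args": ["status"]}
assert unframe(frame(msg)) == msg
```

Length-prefixing is what lets both sides of a pipe or socket know where one message ends and the next begins without a delimiter scan.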
Built around three rules.
Wylde is the AI environment we wanted for ourselves. The defaults reflect that.
Local-first
Your models, your data, your hardware. Inference runs on the same machine as the GUI. The cloud is optional, never assumed, never required for the platform to work.
Privacy by default
Every service binds to 127.0.0.1. Nothing reaches the network unless you turn it on. Remote access is opt-in, gated by a per-device approval flow.
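Binding to 127.0.0.1 means the refusal happens at the OS level, not in application logic. A minimal stdlib illustration of the same pattern (not Wylde's code):

```python
import socket

# Bind a server socket to the loopback interface only; the OS
# refuses connections from any other machine on the network.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0 = pick any free port
srv.listen()
host, port = srv.getsockname()
print(f"listening on {host}:{port}")  # reachable from this machine only
srv.close()
```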
Open architecture
Services self-register, tools are added in plain Python, workflows are YAML you can read. No magic — just a clean interface and good defaults you can change.
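"Tools are added in plain Python" suggests a registry-plus-decorator pattern. A hypothetical sketch of what that could look like — the names (`TOOLS`, `tool`) are illustrative, not Wylde's actual API:

```python
# Hypothetical tool registry: a dict plus a decorator that adds
# any plain Python function to it under a chosen name.
TOOLS = {}

def tool(name: str):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("word_count")
def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())

assert TOOLS["word_count"]("an AI-native shell") == 3
```

The appeal of this pattern is that adding a capability is just writing a function; no subclassing or config file is required.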
A platform, not a chat box.
Wylde behaves like a small distributed system on a single machine — twenty cooperating services, a discovery layer, a unified gateway, and a desktop app that drives all of it.
- Desktop: Tauri 2 + Svelte 5 (Fletch GUI)
- Backend: Python / Flask + FastAPI microservices
- IPC: \\.\pipe\wylde-* (msgpack, ~0.1 ms)
- Inference: Ollama, OpenVINO NPU, CUDA 12.8
- Graph: Neo4j-compatible (Bolt + Cypher)
- Discovery: mDNS (zeroconf), optional Consul
# Install + run — Wylde launches its own native services.
$ launch_wylde.bat

# Health check across the whole platform.
$ curl http://127.0.0.1:5000/health
{ "status": "healthy", "healthy_services": 20, "active_tools": 62 }

# Talk to the orchestrator over the pipe.
$ wylde ask "summarise today's commits"
Get Wylde on your machine.
Windows 11 build available now. macOS & Linux builds are on the roadmap.
The build link is a placeholder while we tag the first public release.