Mastodon Feed: Post

Boosted by ratatui_rs@fosstodon.org ("Ratatui"):
orhun@fosstodon.org ("Orhun Parmaksız 👾") wrote:

New TUI dropped for managing LLM traffic and GPU resources 🔥

🌀 **ollamaMQ** — Async message queue proxy for Ollama

💯 Per-user queues, fair-share scheduling, OpenAI-compatible endpoints, streaming

🦀 Written in Rust & built with @ratatui_rs

⭐ GitHub: https://github.com/Chleba/ollamaMQ

#rustlang #ratatui #tui #gpu #llm #ollama #backend #proxy #terminal
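The "per-user queues, fair-share scheduling" feature can be pictured as round-robin draining across per-user FIFO queues, so one user's backlog cannot starve others. This is only an illustrative sketch, not ollamaMQ's actual code; the function and type names here are invented for the example.

```rust
use std::collections::{HashMap, VecDeque};

// Illustrative fair-share scheduler: round-robin across per-user queues.
// Names (`drain_fair`, job strings) are hypothetical, not ollamaMQ's API.
fn drain_fair(queues: &mut HashMap<String, VecDeque<String>>) -> Vec<String> {
    let mut users: Vec<String> = queues.keys().cloned().collect();
    users.sort(); // deterministic visiting order for the example

    let mut out = Vec::new();
    loop {
        let mut progressed = false;
        // One round: take at most one job from each user's queue.
        for user in &users {
            if let Some(job) = queues.get_mut(user).and_then(|q| q.pop_front()) {
                out.push(job);
                progressed = true;
            }
        }
        if !progressed {
            break; // all queues empty
        }
    }
    out
}

fn main() {
    let mut queues = HashMap::new();
    queues.insert(
        "alice".to_string(),
        VecDeque::from(vec!["a1".to_string(), "a2".to_string(), "a3".to_string()]),
    );
    queues.insert("bob".to_string(), VecDeque::from(vec!["b1".to_string()]));

    // alice's three queued jobs interleave with bob's single job:
    // ["a1", "b1", "a2", "a3"]
    println!("{:?}", drain_fair(&mut queues));
}
```

A real proxy would drain queues as GPU capacity frees up rather than all at once, but the interleaving shown here is the essence of fair-share ordering.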
