# "prefiller" and "decoder" backend servers for large language model inference.
# It is useful for scaling out inference workloads and balancing load across
# multiple backend instances.
#
# Features:
# ...
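The load balancing described above can be sketched as a simple round-robin router over the two backend pools. This is a minimal illustration only; the pool names, URLs, and `route` helper are assumptions for the sketch, not the proxy's actual configuration or API:

```python
import itertools

# Hypothetical backend pools; the hostnames are illustrative placeholders.
PREFILL_BACKENDS = ["http://prefill-0:8000", "http://prefill-1:8000"]
DECODE_BACKENDS = ["http://decode-0:8000", "http://decode-1:8000"]


class RoundRobinPool:
    """Cycle through a fixed list of backend URLs, one per request."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def next_backend(self):
        return next(self._cycle)


prefill_pool = RoundRobinPool(PREFILL_BACKENDS)
decode_pool = RoundRobinPool(DECODE_BACKENDS)


def route(stage):
    """Pick the next backend for the given stage ('prefill' or 'decode')."""
    pool = prefill_pool if stage == "prefill" else decode_pool
    return pool.next_backend()
```

Each call to `route` advances the corresponding pool, so successive requests for the same stage are spread evenly across that stage's backends.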
sentinel mcp-proxy --block-on critical -- your-server   # Only block critical risks
sentinel mcp-proxy --block-on low -- your-server        # Block everything suspicious
...