Recently, HazyResearch introduced a simple communication protocol for integrating local and cloud LLMs.
The core idea behind the protocol is to push as much work as possible onto a local LLM that has direct access to the local data, minimizing cloud API costs while maintaining high-quality outputs. The protocol comes in two flavors:
* Minion: a local LLM holds the full context (e.g., a long document) and chats back and forth with a cloud LLM, which steers the conversation and produces the final answer without ever seeing the raw data (a sketch of this loop follows the list).
* Minions: the cloud LLM decomposes the task into small subtasks that the local LLM executes in parallel over chunks of the context; the cloud LLM then aggregates the partial results.
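To make the division of labor concrete, here is a minimal sketch of the Minion loop. The `local_chat` and `cloud_chat` helpers are hypothetical placeholders (not the actual Minions API), standing in for an on-device model call (e.g., via Ollama) and a cloud API call, respectively; the message format and stopping convention are assumptions for illustration.

```python
# Minimal Minion-style loop, assuming two placeholder chat functions:
# `local_chat` (small on-device model that can see the full document) and
# `cloud_chat` (frontier model that only sees short messages).

def local_chat(messages: list[dict]) -> str:
    """Call the local model (e.g., via Ollama); placeholder implementation."""
    raise NotImplementedError

def cloud_chat(messages: list[dict]) -> str:
    """Call the cloud model (e.g., an OpenAI-compatible API); placeholder."""
    raise NotImplementedError

def minion(task: str, document: str, max_rounds: int = 5) -> str:
    # The cloud model never sees `document`; it only sees the task and the
    # local model's short answers, which keeps cloud token costs low.
    cloud_msgs = [{"role": "user", "content":
                   f"Task: {task}\nAsk the local worker questions about the "
                   f"document it holds. Reply 'FINAL: <answer>' when done."}]
    answer = ""
    for _ in range(max_rounds):
        question = cloud_chat(cloud_msgs)
        if question.startswith("FINAL:"):
            return question.removeprefix("FINAL:").strip()
        # The local model answers each question using the full document.
        answer = local_chat([{"role": "user", "content":
                              f"Document:\n{document}\n\nQuestion: {question}"}])
        cloud_msgs += [{"role": "assistant", "content": question},
                       {"role": "user", "content": answer}]
    return answer  # fall back to the last local answer if no FINAL reply
```

The key design point is that the expensive cloud model exchanges only short questions and answers, while the long context stays on-device with the cheap local model.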