Generative AI with Local LLMs

Llama 3.2

A collection of 3 posts
Minion(s): A Simple Protocol for Communicating with Local and Cloud LLMs

Recently, HazyResearch introduced a simple communication protocol for integrating local and cloud LLMs. The core idea behind the protocol is to maximize the use of local models on local data, minimizing cloud API costs while maintaining high-quality outputs. The protocol comes in two flavors: Minion, a local LLM…
16 Mar 2025 18 min read
How to Use Retrieval-Augmented Generation (RAG) locally

In this blog post, we'll explore how to use Retrieval-Augmented Generation (RAG) to build more effective and engaging conversational AI applications. We'll cover the basics of RAG and its benefits, and provide step-by-step instructions on how to develop your own RAG mechanism for local use. What is…
12 Nov 2024 6 min read
How to Use Llama 3.2 Vision Models, Part 1

From Local Inference to API Integration: Llama 3.2, the latest iteration of the LLaMA series, brings enhanced multimodal capabilities, including a powerful vision model. Whether you're processing images for analysis, generating visual content, or building AI-driven applications…
16 Oct 2024 5 min read
Generative AI with Local LLMs © 2025