Generative AI with Local LLMs

Llama

A collection of 3 posts
How to Use Retrieval-Augmented Generation (RAG) locally
rag

In this blog post, we'll explore how to use Retrieval-Augmented Generation (RAG) to build more effective and engaging conversational AI applications. We'll cover the basics of RAG and its benefits, and provide step-by-step instructions for developing your own RAG pipeline for local use. …
12 Nov 2024 6 min read
How to Use Llama 3.2 Vision Models, Part 1
ai

How to Use Llama 3.2 Vision Models: From Local Inference to API Integration, Part 1. Llama 3.2, the latest iteration of the LLaMA series, brings enhanced multimodal capabilities, including a powerful vision model. Whether you're processing images for analysis, generating visual content, or building AI-driven applications, …
16 Oct 2024 5 min read
First step to the world of AI: installing a local LLM
Local LLM

Installing a local LLM. After a long break, I've decided to embark on my journey into AI. Significant progress has been made in the field over that time: today we're surrounded by numerous Large Language Models (LLMs), tools, and agents. …
08 Oct 2024 3 min read
Generative AI with Local LLMs © 2025