
Grandma Qwen2.5 3B Instruct GGUF

A fine-tuned witty grandma chat model based on Qwen2.5 3B Instruct, exported in GGUF format for local inference.

  • Updated May 5, 2026
  • Hugging Face
  • LLM

About


Grandma Qwen2.5 3B Instruct GGUF is a personality-tuned local chat model built around a warm, witty grandmother voice.

The model is based on Qwen2.5 3B Instruct and published as GGUF so it can run locally in tools like llama.cpp, Ollama, and LM Studio. The goal is not to create another generic assistant. The goal is to create a small model with a distinct conversational character: affectionate, clever, playful, and lightly teasing.

What it does

  • Provides a character-focused chat model with a witty grandma personality.
  • Runs locally through GGUF-compatible runtimes.
  • Uses Qwen2.5 3B Instruct as the base model.
  • Includes a Q4_K_M quantized GGUF file for practical local use.
  • Is suited for cozy conversation, roleplay, storytelling, and personality-rich assistants.
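Because the model ships as a quantized GGUF file, trying it locally mostly means pointing a GGUF runtime at the file. A minimal sketch, where the repository and file names are placeholders (the real ones are on the Hugging Face page):

```shell
# Option 1: pull and chat directly through Ollama
# (hf.co/<user>/<repo> is a placeholder for the actual Hugging Face repo)
ollama run hf.co/<user>/grandma-qwen2.5-3b-instruct-gguf

# Option 2: download the Q4_K_M file and chat through llama.cpp
huggingface-cli download <user>/grandma-qwen2.5-3b-instruct-gguf \
  grandma-qwen2.5-3b-instruct-q4_k_m.gguf --local-dir .
llama-cli -m grandma-qwen2.5-3b-instruct-q4_k_m.gguf -cnv \
  -p "You are a warm, witty grandmother."
```

LM Studio works the same way: search for the repository in its model browser, download the Q4_K_M file, and start a chat.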

Why it matters

Small local models are most useful when they have a clear job. This project explores how a compact fine-tuned model can feel more expressive and memorable by leaning into a specific voice instead of trying to be a general assistant for every task.

Current status

The model is public on Hugging Face. The model card lists tags including text generation, GGUF, Qwen2.5, llama.cpp, Ollama, LM Studio, roleplay, instruct, and fine-tuned. The repository includes the README, export metadata, and the quantized GGUF file.


Details

Format: GGUF, Q4_K_M
Runtime: llama.cpp, Ollama, LM Studio
Context: 32k tokens (per GGUF metadata)
License: Apache 2.0 (inherited from the model card metadata)
Year: 2026
Last updated: May 5, 2026
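For Ollama users, the details above could translate into a small Modelfile; a sketch, assuming a local copy of the GGUF file (the file name and system prompt here are illustrative, not from the repository):

```
# Modelfile (hypothetical) for the Grandma Qwen GGUF
FROM ./grandma-qwen2.5-3b-instruct-q4_k_m.gguf

# Match the 32k context length advertised in the GGUF metadata
PARAMETER num_ctx 32768

SYSTEM "You are a warm, witty grandmother: affectionate, clever, and lightly teasing."
```

Building it with `ollama create grandma-qwen -f Modelfile` would then let `ollama run grandma-qwen` start a chat in the tuned voice.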

© 2026 Rishabh Mehan · All rights reserved · Built with Next.js and a little stubbornness.