Model Fine-Tuning — LoRA & RAG
WIP. A personal deep dive into model fine-tuning. I downloaded Qwen3 1.7B locally and fine-tuned it with LoRA (Low-Rank Adaptation), and am currently implementing RAG (Retrieval-Augmented Generation) on top. The primary goal was learning: understanding the full pipeline from model acquisition to fine-tuning to inference. More details are covered in the related blog post.
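The core LoRA idea can be sketched in plain PyTorch (a minimal illustration, not the actual training code used here): the original weight matrix is frozen, and only two small low-rank factors `A` and `B` are trained, so the effective weight becomes `W + (alpha/r) * B @ A`.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA) update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the original weights
        # Low-rank factors: delta_W = B @ A, scaled by alpha / r.
        # A is initialized small and B to zero, so training starts
        # from the unmodified base model.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(64, 64), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # only the two r x 64 factors are trainable
```

In practice libraries like Hugging Face PEFT wrap the model's attention projections this way automatically; the payoff is that only a small fraction of the parameters need gradients and optimizer state.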
Tech Stack
Python PyTorch LoRA RAG Qwen3
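The retrieval half of RAG can be sketched without any model dependencies (a toy bag-of-words similarity stands in for a real sentence encoder; document texts and the `retrieve` helper below are illustrative, not from this project):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real RAG would use a dense encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "LoRA adapts large models with low-rank weight updates",
    "RAG retrieves relevant documents to ground generation",
    "Qwen3 is a family of open-weight language models",
]
context = retrieve("how does retrieval augmented generation work", docs)
# The retrieved passage is prepended to the prompt before inference.
prompt = f"Context: {context[0]}\nQuestion: how does RAG work?"
```

The same structure carries over to the real pipeline: swap the toy embedding for an encoder, store vectors in an index, and feed the retrieved context into the fine-tuned model's prompt.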