Hacker News

Here's a VRAM requirements table for fine-tuning an LLM: https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#...

No matter how much VRAM you have, there's something that doesn't fit :)
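For a back-of-the-envelope feel for why the table looks the way it does, here's a rough sketch. The bytes-per-parameter figures are common rules of thumb (not numbers taken from the LLaMA-Factory table), they count only model states, and they ignore activations, gradient checkpointing, and framework overhead:

```python
# Rough rule-of-thumb VRAM estimate for fine-tuning (model states only;
# ignores activations, KV cache, and framework overhead).
# Bytes-per-parameter values are common approximations, not figures
# from the LLaMA-Factory table.

def estimate_vram_gb(n_params_billion: float, method: str) -> float:
    bytes_per_param = {
        # fp16 weights + fp16 grads + fp32 master copy + fp32 Adam m and v
        "full_amp": 2 + 2 + 4 + 4 + 4,
        # frozen fp16 base model; adapter weights are negligible
        "lora_fp16": 2,
        # frozen 4-bit base model; adapter weights are negligible
        "qlora_4bit": 0.5,
    }[method]
    # n_billion * 1e9 params * bytes / 1e9 bytes-per-GB = n_billion * bytes
    return n_params_billion * bytes_per_param

print(estimate_vram_gb(7, "full_amp"))    # ~112 GB for a 7B model
print(estimate_vram_gb(7, "qlora_4bit"))  # ~3.5 GB before overhead
```

That ~30x spread between full fine-tuning with AdamW and QLoRA is the whole reason the table has so many rows.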



This is also how I learned that 8x7B doesn't mean "eight 7B models joined somehow".
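Right: in a Mixture-of-Experts model like Mixtral, only the feed-forward blocks are replicated per expert, while attention and embeddings are shared, so the total is well under 8x7B = 56B. A sketch of the parameter count, using hyperparameters assumed from the public Mixtral 8x7B config (norm weights and biases omitted as negligible):

```python
# Why "8x7B" isn't 56B: only the SwiGLU feed-forward blocks exist per
# expert; attention and embeddings are shared across experts.
# Hyperparameters assumed from the public Mixtral 8x7B config.

def moe_param_count(d_model=4096, d_ffn=14336, n_layers=32,
                    n_heads=32, n_kv_heads=8, n_experts=8,
                    vocab=32000):
    d_head = d_model // n_heads
    # attention: Q and O are d_model x d_model; K and V are smaller
    # because of grouped-query attention (n_kv_heads < n_heads)
    attn = 2 * d_model * d_model + 2 * d_model * (n_kv_heads * d_head)
    # SwiGLU feed-forward: gate, up, and down projections, per expert
    ffn = 3 * d_model * d_ffn * n_experts
    router = d_model * n_experts  # picks which experts see each token
    embeddings = 2 * vocab * d_model  # input embedding + output head
    return n_layers * (attn + ffn + router) + embeddings

print(f"{moe_param_count() / 1e9:.1f}B")  # ~46.7B, not 56B
```

The "7B" in the name roughly describes the active parameters per token (two experts are routed per token), not a standalone 7B model glued together eight times.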



