Hacker News

The original GPT-4 may have been around that size (16 × 110B).

But it's pretty clear GPT-4 Turbo is a smaller, heavily quantized model.



Yeah, it’s not even close to doing inference over 1.8T weights for Turbo queries.
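For scale, the rumored figures in this thread can be sanity-checked with quick back-of-the-envelope arithmetic. All numbers below are unconfirmed community estimates, not official specs:

```python
# Rumored GPT-4 mixture-of-experts configuration (unconfirmed).
EXPERTS = 16                # rumored number of MoE experts
PARAMS_PER_EXPERT = 110e9   # rumored ~110B parameters per expert

total = EXPERTS * PARAMS_PER_EXPERT
print(f"Total parameters: {total / 1e12:.2f}T")  # ~1.76T, the oft-quoted "1.8T"

# Approximate weight-storage footprint at different precisions,
# illustrating why heavy quantization matters at this scale.
for name, bytes_per_weight in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = total * bytes_per_weight / 1e9
    print(f"{name}: {gb:,.0f} GB")
```

Note this only counts weight storage; activations, KV cache, and serving overhead add more, and with MoE routing only a fraction of the experts run per token.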



