Hacker News
ru552 on April 17, 2024 | on: Mixtral 8x22B
Easiest is probably with Ollama [0]. I think the Ollama API is OpenAI-compatible.
[0] https://ollama.com/
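Since Ollama exposes an OpenAI-style chat-completions endpoint, a call against a local server can be sketched with just the standard library. This is a sketch, not from the thread: it assumes `ollama serve` is running on the default port 11434 and that a Mixtral model tag (here `mixtral:8x22b`) has already been pulled.

```python
import json
import urllib.request

# Ollama's OpenAI-compatible endpoint on the default local port (assumption).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model, prompt):
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model, prompt, url=OLLAMA_URL):
    """POST the request to an OpenAI-compatible server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Example (requires a running Ollama server with the model pulled):
# print(chat("mixtral:8x22b", "Say hello in one word."))
```

Because the request shape is the plain OpenAI one, the same code should work against any of the OpenAI-compatible servers mentioned in this thread by swapping the URL.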
talldayo on April 17, 2024
Most inference servers are OpenAI-compatible. Even the "official" llama.cpp server should work fine:
https://github.com/ggerganov/llama.cpp/blob/master/examples/...
pants2 on April 17, 2024
Ollama runs locally. What's the best option for calling the new Mixtral model on someone else's server programmatically?
Arcuru on April 17, 2024
OpenRouter lists several options:
https://openrouter.ai/models/mistralai/mixtral-8x22b
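OpenRouter speaks the same OpenAI-style API, so the hosted call is the same request pointed at its endpoint. A sketch with stdlib only; the model slug comes from the linked page, and the API key and header handling here are assumptions you should check against OpenRouter's docs.

```python
import json
import urllib.request

# OpenRouter's OpenAI-compatible chat endpoint (assumption based on its API docs).
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def openrouter_request(model, prompt, api_key):
    """Return a (url, headers, body) triple for an OpenRouter chat call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # placeholder key; supply your own
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return OPENROUTER_URL, headers, body

def chat(model, prompt, api_key):
    """Send the request and return the model's reply text."""
    url, headers, body = openrouter_request(model, prompt, api_key)
    req = urllib.request.Request(url, data=body, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires a real API key):
# print(chat("mistralai/mixtral-8x22b", "Say hello in one word.", "YOUR_KEY"))
```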