I’m trying to figure out how to host one myself. I’ve been trying Bavarder and LocalAI, but I’m failing due to a lack of knowledge and missing instructions. Any advice? Has anyone succeeded with anything? I’d be happy to start with smaller steps as well, as long as I get somewhere.
It’s pretty easy with Ollama. Install it, then
ollama run mistral
(or another model; there are a few available out of the box). https://ollama.ai/
Another option is Llamafile: https://github.com/Mozilla-Ocho/llamafile
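Once Ollama is running, it also exposes a local HTTP API (on port 11434 by default), so you can script against it instead of using the interactive chat. A minimal sketch, assuming the `/api/generate` endpoint and payload shape from Ollama's API docs — worth double-checking against the version you install:

```python
import json
import urllib.request

# Minimal request against a local Ollama server. Assumes you've already
# pulled a model with `ollama run mistral` (or `ollama pull mistral`).
payload = {
    "model": "mistral",
    "prompt": "Why is the sky blue?",
    "stream": False,  # one JSON object back instead of a stream of chunks
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the Ollama server is actually running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```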
If you’re low on hardware, look into Petals or the Kobold AI Horde. Both share models in a P2P fashion, AFAIK.
Petals, at least, lets you create private swarms, so you could host part of a model on your 24/7 server, part on your laptop CPU, and the rest on your laptop GPU, as an example.
Haven’t tried it though, so good luck ;)
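On the low-hardware point: before picking a model, a back-of-the-envelope memory check helps. Weights take roughly (parameter count × bits per weight ÷ 8) bytes, plus some headroom for the KV cache and activations. A rough sketch — the 20% overhead figure is my own fudge factor, not a published number:

```python
def approx_model_ram_gb(params_billion: float, bits_per_weight: int,
                        overhead: float = 0.20) -> float:
    """Rough RAM/VRAM estimate: weight bytes plus a fudge factor for
    KV cache, activations, and runtime overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

# A 7B model in 4-bit quantization vs. full fp16:
print(f"7B @ 4-bit: {approx_model_ram_gb(7, 4):.1f} GB")   # ~4.2 GB
print(f"7B @ fp16 : {approx_model_ram_gb(7, 16):.1f} GB")  # ~16.8 GB
```

Which is why quantized 7B models fit on an ordinary laptop, while unquantized ones generally don’t.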
Sounds like a really cool project; sadly I don’t have much knowledge to contribute. Still, what kind of issues have you run into? Any specific errors or problems?
Maybe Serge would fit your use case.
Serge is probably the easiest way to get a basic setup. If you just want to download a model and chat, I recommend it.
If you want to be able to get into the nitty gritty or play with options besides just a chat, I recommend Text Generation WebUI.
Installing it is pretty easy; then you just download your desired model from Hugging Face.
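If it helps: files in a Hugging Face repo resolve under a predictable URL pattern, so you can also fetch a model directly with wget or a script. A small sketch of building that URL — the repo and filename below are just examples (check the repo’s “Files” tab for what actually exists):

```python
import urllib.parse

def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct download URL for a file in a Hugging Face repo."""
    return (
        "https://huggingface.co/"
        f"{repo_id}/resolve/{urllib.parse.quote(revision)}/{urllib.parse.quote(filename)}"
    )

# Example (verify the exact filename on the repo page first):
url = hf_file_url("TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
                  "mistral-7b-instruct-v0.2.Q4_K_M.gguf")
print(url)
# Then download with e.g. `wget <url>` or urllib.request.urlretrieve(url, "model.gguf")
```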
Or if you want to use it for roleplay or adventure style games, KoboldCPP is easy to set up.
I’ve heard good things about H2O AI if you want to self host and tweak the model by uploading documents of your own (so that you get answers based on your dataset). I’m not sure how difficult it is. Maybe someone more knowledgeable will chime in.
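Not H2O-specific, but the “answers based on your dataset” part is usually retrieval-augmented generation: chunk your documents, find the chunks most relevant to the question, and paste them into the prompt. A toy sketch of the retrieval step using plain word overlap (real tools use embeddings, but the idea is the same):

```python
def score(question: str, chunk: str) -> int:
    """Count words shared between question and chunk (toy similarity)."""
    return len(set(question.lower().split()) & set(chunk.lower().split()))

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks with the most word overlap with the question."""
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:k]

docs = [
    "Backups run nightly at 02:00 and are kept for 30 days.",
    "The VPN uses WireGuard on port 51820.",
    "Monitoring alerts go to the #ops channel.",
]
question = "What port does the VPN use?"
context = retrieve(question, docs, k=1)[0]
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
```

The prompt then goes to whatever local model you’re running, which answers from your data instead of its training set.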
I haven’t looked into specific apps, but I’ve been wanting to try various trained models, and I figured self-hosting JupyterHub and pulling models from Hugging Face would be a quick, flexible way to do it.