Thank you! Very useful. I am, again, surprised how a better way of asking questions affects the answers almost as much as using a better model.
I need to look into flash attention! And if I understand you correctly, a larger llama3.1 model would be better prepared to handle a larger context window than a smaller llama3.1 model?
Thanks! I actually picked up the concept of context window, and from there how to create a modelfile, through one of the links provided earlier, and it has made a huge difference. In your experience, would a small model like llama3.2 with a bigger context window be able to provide the same output as a big model, like qwen2.5:14b, with a more limited window? The bigger window obviously allows more data to be taken into account, but how does the model size compare?
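For anyone else following along, bumping the context window is only a couple of lines in an Ollama modelfile. The base model and the 8192 value below are just examples, not recommendations:

```
# Modelfile — example only; swap in whichever base model you use
FROM llama3.2
# num_ctx sets the context window in tokens; Ollama's default is
# fairly small (2048–4096 depending on version), so raise it here
PARAMETER num_ctx 8192
```

Then build it with something like `ollama create llama3.2-bigctx -f Modelfile` and run that model name instead. Bigger num_ctx costs more RAM/VRAM, so there's a practical ceiling per machine.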
Thank you for your detailed answer:) It's been 20 years and 2 kids since I last tried my hand at reading code, but I'm doing my best to catch up😊 Context window is a concept I picked up from your links, and it has helped me a lot!
The problem I keep running into with that approach is that only the last page is actually summarised, and some of the texts are… longer.
Do you know of any nifty resources on how to create RAGs using ollama/webui? (Or even fine-tuning?) I've tried to set it up, but the documents provided don't seem to be analysed properly.
I’m trying to get the LLM into reading/summarising a certain type of (wordy) files, and it seems the query prompt is limited to about 6k characters.
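One workaround for the "only the last page gets summarised" / prompt-length problem is map-reduce style chunking: split the document into pieces that fit the limit, summarise each piece, then summarise the summaries. A rough sketch against Ollama's local REST API — the model name, chunk size, and prompts here are assumptions to adjust, not a recommendation:

```python
import json
import urllib.request

def chunk_text(text: str, max_chars: int = 6000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks small enough for the prompt limit."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # step back a little so sentences cut at a boundary still get context
        start = end - overlap
    return chunks

def ollama_generate(prompt: str, model: str = "qwen2.5:14b") -> str:
    """One non-streaming call to a local Ollama server (assumes it's running)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def summarise(text: str) -> str:
    """Map: summarise each chunk. Reduce: summarise the partial summaries."""
    partials = [
        ollama_generate(f"Summarise this section:\n\n{c}") for c in chunk_text(text)
    ]
    return ollama_generate(
        "Combine these section summaries into one summary:\n\n" + "\n\n".join(partials)
    )
```

It's crude (character counts, not tokens), but it stops the model from silently dropping everything except the tail of the document.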
Anything that is more about talking to different parties rather than documenting and being the one to deliver. The more specialised the people you connect with, the better. They will love your ability to see the patterns of the workplace, your helicopter perspective. That will help them test their ideas, understand the concepts and what their task is all about. They will also love that you will not micromanage (as long as you don't end up hyperfocusing on their topic) and let them do their thing.
Don’t be the specialist. Don’t be the one that tries to have an eye on all the details, all the numbers. I tried to be an accountant for a while…
Totally unlike the other fielded armies globally at the moment.
/S
No, we need to be able to keep two thoughts in our heads at the same time or we are bound to repeat the mistakes. Terror and oppression are terrible regardless of what the perpetrator and the victim are called.
Exactly! That confrontation line isn't about age, gender or background. It's always the haves vs. the have-nots. But it is convenient when those with legitimate demands are tricked into fighting windmills.
Not to mention the Streisand effect. They will get the exact opposite effect which they desire. But maybe it will be the lesson Europe needs…
Oh my… Last time I read pieces like this, the new architecture was called bulldozer.
It needs to be low, but positive, and kept stable. If it's too high it will be self-sustaining and increasing; if it's negative, everything stalls. 2% seems to fit the bill.
There could be an argument that 4% would have been just as good, and had the rest of the world united on 4%, it would*. However, it would not have changed anything in last year's fight against inflation. The target would have been defended just as fiercely, causing just as much collateral. Only the numbers would have been slightly different.
*Ignoring for a bit those countries that have had to fight to keep inflation up.
I'm just at the beginning, but my plan is to use it to evaluate policy docs. There is so much context to keep up with, so any way to load more context into the analysis will be helpful. Learning how to add Excel information to the analysis will also be a big step forward.
I will have to check out Mistral:) So far qwen2.5:14b has been the best at providing analysis of my test scenario. But I guess an even higher parameter model will have its advantages.