I don't usually comment, but the video is very informative and I want to support you. Keep going :)
What's the difference between using Docker and Ollama? If you could give a detailed explanation, I'd appreciate that.
I am developing a secure RAG agent for PDF reading in the banking domain. Can you tell me which CPU-based LLM is best suited for my requirements? Also, I am not able to restrict question answering to the PDF content using a pretrained model; it gives an answer to every question.
How can you add an interface, or make it accept input files?
I will definitely be giving this a shot. You said cores in your video: do I enter the number of cores or threads? I have an i5-13500.
Hello, I'm confused about whether I can run any model on my Ryzen 3 3200G PC with 16 GB of RAM.
Will try it surely!
This video is for highly advanced users only, this should've been made clear from the start. Running docker in VS Code is nothing for absolute beginners who want to run a local AI without an expensive GPU, as the title suggests.
Awesome content! Could you make a video where you customize a reasoning model further, like connecting it to a folder of PDF files as a database?
Hey, thanks for the video.
The audio is not synchronised with your video.
i love you man
nice name zen
How can I contact you, Zen?
2:47 you're so optimistic, dude
I always start my prompting with the question "Was 2024 a leap year?" On simple local machines it tells you fairy tales. That's disappointing. But thanks for your good explanation 👏
...or you can just install LM Studio.
Phi... okay, but not that powerful. I thought there might be something I didn't know in this video. I would rather use Open WebUI or AnythingLLM with Ollama if you just need a chat interface.
Kindly make a video on how to set up your own DeepSeek R1 API.
@zenvanriel