Anyone built their own offline LLM AI?
Posted on 4/17/26 at 10:13 pm
I'm talking about something for business purposes: emails, calendar, doc review, etc.
I'm working on the idea of an offline LLM that runs on its own laptop. Any work it does would need to be based on files loaded onto it from USB drives.
Anyone tackled this yet?
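For what the OP describes, one rough starting point is to read the documents off the mounted USB drive, stuff them into a prompt, and send that to a locally running model server. This is only a sketch, assuming Ollama is installed and serving on its default port, and that the USB drive mounts at some folder you pass in; the model name is a placeholder:

```python
import json
import urllib.request
from pathlib import Path

# Default Ollama endpoint (assumes `ollama serve` is running locally).
OLLAMA_URL = "http://localhost:11434/api/generate"

def load_docs(folder):
    """Read every .txt file under a folder (e.g. a mounted USB drive)."""
    return {p.name: p.read_text(encoding="utf-8", errors="ignore")
            for p in Path(folder).glob("**/*.txt")}

def build_prompt(docs, question):
    """Stuff the documents into a single prompt for a small local model."""
    context = "\n\n".join(f"--- {name} ---\n{text}"
                          for name, text in sorted(docs.items()))
    return (f"Use only the documents below to answer.\n\n{context}"
            f"\n\nQuestion: {question}")

def ask_local_model(prompt, model="gemma"):
    """POST the prompt to the local Ollama server and return its reply."""
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Everything stays on the machine; the only network traffic is the localhost call to the model server. For anything bigger than a handful of files you'd want real chunking and retrieval instead of dumping whole documents into one prompt.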
Posted on 4/18/26 at 12:43 pm to HailToTheChiz
There are tons you can run locally; I run them inside n8n for workflow automation.
Creating your "own" will require a few million GPUs.
Posted on 4/18/26 at 12:48 pm to j1897
Yeah, I don't necessarily want to "create" my own. Bad word to use.
Honestly, I'm not even sure how I would fully implement it, but I wanted to start some research. May just need to dabble.
Posted on 4/18/26 at 2:11 pm to HailToTheChiz
My old gaming computer that has a GPU runs Ollama. I have a couple of other older PCs that run Proxmox, including a VM that runs Docker containers; among those is Open WebUI, which gives you a sort of private GPT interface, especially when you use Ollama on the backend. n8n uses Ollama to do some automation, and I point any other service that can use LLM features at Ollama too.
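A setup like this is commonly wired together with Docker Compose. This is just a sketch, not that poster's actual config; the image tags, ports, and the `OLLAMA_BASE_URL` variable follow the projects' usual published defaults, but check their docs for your versions:

```yaml
services:
  ollama:
    image: ollama/ollama                # serves models on 11434 by default
    volumes:
      - ollama:/root/.ollama            # persist downloaded models
    ports:
      - "11434:11434"
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # point the UI at the Ollama container
    ports:
      - "3000:8080"                     # browse to http://localhost:3000
    depends_on:
      - ollama
volumes:
  ollama:
```

With that up, other services on the same Docker network (n8n, for example) can hit Ollama at `http://ollama:11434` without anything leaving the box.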
Posted on 4/18/26 at 8:23 pm to HailToTheChiz
Google released Gemma 4 open models recently.
Posted on 4/19/26 at 7:40 am to Brisketeer
I have to run the smallest versions of those because I have no VRAM, but the Gemma 4 models have been the best I've used locally (on my hardware).
Posted on 4/19/26 at 10:26 am to Brisketeer
I use Ollama and pulled Gemma 4. It's running on my gaming laptop with no issues at all. Depending on your goal, you'll need to create an app layer to help dial in responses from the model. I've been working on mine for a week and a half, and it's about 65% of the way to the GPT or Claude premium versions.
Just takes a bit of time to refine answers and such. But Gemma 4 is highly recommended, as it has a low resource requirement.
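The "app layer" can start out as little more than a wrapper that pins a system prompt and carries chat history before handing everything to the local model. A minimal sketch, assuming a default Ollama install with its `/api/chat` endpoint; the model name and system prompt are placeholders:

```python
import json
import urllib.request

# Hypothetical system prompt; this is where you "dial in" the responses.
SYSTEM_PROMPT = ("You are a concise business assistant. "
                 "Answer only from the documents and context you are given.")

class LocalChat:
    """Thin app layer over a local Ollama server: system prompt + history."""

    def __init__(self, model="gemma", url="http://localhost:11434/api/chat"):
        self.model = model
        self.url = url
        self.history = [{"role": "system", "content": SYSTEM_PROMPT}]

    def build_messages(self, user_text):
        """Return the full message list that would go to the model."""
        return self.history + [{"role": "user", "content": user_text}]

    def send(self, user_text):
        """POST the conversation to Ollama; needs `ollama serve` running."""
        messages = self.build_messages(user_text)
        payload = json.dumps({"model": self.model, "messages": messages,
                              "stream": False}).encode()
        req = urllib.request.Request(self.url, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            reply = json.loads(resp.read())["message"]
        self.history = messages + [reply]   # keep context for the next turn
        return reply["content"]
```

Refining answers then mostly means iterating on `SYSTEM_PROMPT` and on what you inject into the history, rather than touching the model itself.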
This post was edited on 4/19/26 at 10:29 am
Posted on 4/19/26 at 4:40 pm to jivy26
Also, if you don't want to make your own agent, you could use Hermes, though I have not tried it.