Look into quantised models (e.g. the GGUF format); these significantly reduce the amount of memory needed and speed up computation at the expense of some quality. If you have 16GB of RAM or more you can run decent models locally without any GPU, though expect speeds closer to one word per second than ChatGPT speeds.
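For a rough idea of what this looks like in practice, here's a minimal sketch using the llama-cpp-python bindings (one common way to run GGUF files on CPU). The model path and thread count are just placeholders for whatever model you've downloaded and whatever your machine has:

```python
# Minimal sketch: running a quantised GGUF model on CPU with llama-cpp-python
# (pip install llama-cpp-python). The model path below is hypothetical;
# substitute any GGUF file you've downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=2048,    # context window size
    n_threads=8,   # CPU threads; tune to your machine
)

output = llm(
    "Q: What does quantisation do to a language model? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

A 7B model at Q4 quantisation needs roughly 4-5GB of RAM, which is why 16GB is a comfortable floor: it leaves room for the OS and the context window on top of the weights.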