Via Slashdot -
'Forget ChatGPT: Why Researchers Now Run Small AIs On Their Laptops'
Nature published an introduction to running an LLM locally, which links to
'5 easy ways to run an LLM locally' | InfoWorld
For good (close to real-time) performance, you would need something like 64 GB of RAM. But if response speed is not important, you can use swap space to run models larger than RAM allows: https://ryanagibson.com/posts/run-llms-larger-than-ram/
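As a minimal sketch of the larger-than-RAM idea: llama.cpp (and its Python bindings, llama-cpp-python) can memory-map the model file instead of loading it all into RAM, so the OS pages weights in and out as needed. The model path below is a hypothetical example, not a real file.

```python
# A minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and a GGUF model file exists
# at the hypothetical path below.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-70b-q4.gguf",  # hypothetical model file
    n_ctx=2048,       # context window size
    use_mmap=True,    # memory-map the weights so the OS can page them
    use_mlock=False,  # don't pin pages in RAM; lets the model exceed RAM
)

# Generation will be slow if the model is larger than physical RAM,
# since pages are swapped in from disk on demand.
out = llm("Q: Why does the sky look blue? A:", max_tokens=64)
print(out["choices"][0]["text"])
```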