There's an interesting blog post about how much effort (and money for infrastructure) is required to train a Large Language Model (LLM) with custom data - https://blog.replit.com/llm-training
Hugging Face does have a free, rate-limited Inference API for some of the smaller pre-trained models, like
https://huggingface.co/google/flan-t5-base
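In case it's useful, here's a minimal sketch of calling that free, rate-limited Inference API from Python (the prompt and the placeholder token are just illustrative):

```python
import requests

# Serverless Inference API endpoint for the model repo google/flan-t5-base
API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-base"
# Replace with your own Hugging Face access token (placeholder below)
headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxx"}

payload = {"inputs": "Translate to French: The weather is nice today."}
response = requests.post(API_URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json())  # e.g. [{"generated_text": "..."}]
```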
And they do have free AutoTrain for fine-tuning on custom data,
https://huggingface.co/autotrain
More links:
https://www.philschmid.de/
Edit: Oct 2023 - a tutorial on how to run an LLM on Google Colab's free GPU tier -
https://betterprogramming.pub/set-up-an-llm-project-using-a-free-gpu-in-google-colab-e55453bfc760
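The gist of that kind of setup is just loading a small model onto the free Colab GPU with transformers - roughly something like this (the model choice and prompt are my own example, not taken from the tutorial):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Use the free Colab GPU if one is attached, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "google/flan-t5-base"  # small enough for the free tier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)

inputs = tokenizer("Summarize: Large models are expensive to train from scratch.",
                   return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```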