I can understand that. To be fair, I don’t actually use ChatGPT; I use a locally run open-source LLM. That said, I do think it’s important to fine-tune any LLM you use to match your writing style. Otherwise you end up with generic, ChatGPT-style writing.
I would argue that not fine-tuning an LLM to match your tone and style counts as either misuse or hobbyist use.
How? GPT4All + Llama, or something else? I’ve just started dipping my toe into locally run open-source LLMs.
not fine tuning a LLM to match tone and style counts as either misuse or hobbyist use
You’ve hit the nail on the head with this one. I think the other commenters are right, that a lot of people will misuse the tool, but nonetheless it is an issue with the users, not the tool itself.
My main workstation runs Linux and I use llama.cpp. Most recently I ran it with Mistral’s latest large model, but I’ve used others in the past.
I appreciate your thoughts here. I think Lemmy, in general, has an indiscriminate anti-LLM bias.