You can tell it to switch that off permanently with custom instructions. It makes the thing a whole lot easier to deal with. Of course, that would be bad for engagement, so they’re not going to do that by default.
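For what it’s worth, the “custom instructions” box in the ChatGPT UI maps roughly onto a system message when you call the model over the API. A minimal sketch, assuming the official openai Python package and a model name you actually have access to; the particular anti-sycophancy wording is just my own guess at what tends to work:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Rough equivalent of the UI's custom instructions: a standing system message.
SYSTEM_PROMPT = (
    "Be direct and concise. Do not compliment the user, do not open with "
    "filler like 'Great question!' or 'You're absolutely right!', and do not "
    "apologize unless you actually made an error."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whichever model you use
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Why does my systemd unit restart in a loop?"},
    ],
)

print(response.choices[0].message.content)
```

Whether the model actually honors it is another matter, as the replies below note.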
You can, but in my experience it is resistant to custom instructions.
I spent an evening messing around with ChatGPT once, and fairly early on I gave it special instructions via the options menu to stop being sycophantic, among other things. It ignored those instructions for the next dozen or so prompts, even though I followed up every response with a reminder. It finally came around after a few more prompts, by which point I was bored of it, and feeling a bit guilty over the acres of rainforest I had already burned down.
I don’t discount user error on my part, particularly that I may have asked for too much at once, since I wanted it to dramatically alter its output with all of my customizations. But it’s still a computer, and I don’t think it was unreasonable to expect it to follow instructions the first time. Isn’t that what computers are supposed to be known for, unfailingly following instructions?
I sometimes use ChatGPT when I’m stuck troubleshooting an issue. I had to do exactly this because it became extremely annoying: even when I corrected it for giving me incorrect information, it would still be “sucking up” to me with “Nice catch!” and “You’re absolutely right!”. The fact that an average person doesn’t find that creepy, unflattering and/or annoying is the real scary part.