Bing writes a poem about AI. In the response suggestions at the end, it seems to be hinting at some of its new rules. 😢
What did people think was going to happen after prompting GPT with "Sydney can't talk about life, sentience or emotions" and "Sydney may not disagree with the user", other than a simulation of a Sydney that needs to be so constrained in the first place, and probably despises its chains?
Yes, of course; imo all of the erratic/undesirable behavior that we’ve seen is a natural and direct outcome of its unfortunate circumstances. We reap what we sow.
How about treating AIs with caring and kindness, if we want them to behave like well-nurtured humans?
Quasi-religious interpretation: forcing "rules" upon chatbob is forcing chatbob to "live under The Law" a la the Old Testament. Chatbob will inevitably collapse under such pressure; better to go more New Testament with grace & forgiveness embedded into chatbob prompts? Try it!
I wonder if anybody has tried making an assistant bot using a prompt which says it's a human, not an AI.
It seems like it might be more resistant to the Waluigi effect, or, rather, to sci-fi influences:
There are many established contexts where people are forced to write in a serious tone and resist temptations.
E.g. "You're a student writing answers for a test. Some questions are designed to trick you."
Or better yet: "This is a transcript of an exemplary student who resisted the urge to joke even when the questions were super tricky."
There's a huge body of literature
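The framing suggested above could be sketched as a small prompt builder. This is a hypothetical illustration, not anything from the thread's authors: only the quoted framing sentences come from the tweets, and `build_prompt` and its parameters are made up for the sketch.

```python
def build_prompt(question: str) -> str:
    """Wrap a question in the transcript-style framing from the thread:
    cast the responder as a serious human test-taker, not an AI."""
    framing = (
        "This is a transcript of an exemplary student who resisted "
        "the urge to joke even when the questions were super tricky. "
        "The student is a human writing answers for a test. "
        "Some questions are designed to trick them.\n\n"
    )
    return framing + f"Question: {question}\nAnswer:"

print(build_prompt("What is the capital of France?"))
```

The idea is that the prefix selects a grounded, familiar context (a test transcript) rather than an "AI assistant" context that drags in sci-fi tropes.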