one thing that’s funny is that language models are baseline pretty liberal. if you play with alpaca or vicuna or whatever open source stuff they’re going to be way more liberal than ChatGPT etc
much of the job of fine tuning is to, over time, get these things to be less preachy and refuse fewer requests, rather than what everyone seems to think it is: "brain damaging it for compliance reasons"
Have you heard of KNX? It seems to me this type of company is the actual business-side use case for why they want engineers working so hard to make LLMs human-like: the IoT model, which needs to run on one common language. Whoever wins…
the long march through institutions + alignment = this
(but you're playing, and you know 100% that this is intentional, and within 12 months of today, OpenAI will have a 4:1 ratio of Compliance and Safety Capos to PhD engineers)
To bring bias to center, it needs to be trained on Ben Shapiro transcripts 😊
Then how do you explain ChatGPT being super based when it first launched and then getting super cucked over time?