if GPT-3 can do the homework you assign your students then the homework you assign your students is fake

notice what you did *not* say: "finally, GPT-3 will save us so much time analyzing international relations"
Quote Tweet
I had GPT-3 answer the kinds of questions I might ask undergraduate international relations students. These answers are all 100% AI-generated. Educators (like me) whose assessments rely mostly on student writing are going to have to adjust to a world where AI can do this.
[Four screenshots of GPT-3-generated answers to the exam questions]
if the homework you assign students is to build a chair and then they have to bring the chair into class and you sit on it, that is *not fake*. if a machine solved this problem for them it would *actually produce real chairs people could actually use*
this essay question garbage is fundamentally fake. every student knows the goal is to write some bullshit that sounds good enough that you're willing to assign it a good grade. you are not actually training the students to perform any kind of useful work and you never were
here's the opening of an essay i wrote as a sophomore in college. i don't remember writing this. i don't know a single thing about fMRI or the patriot act. i bullshitted this whole thing. this is what happens when you force students to write about shit they don't care about
[Screenshot of the essay's opening]
this whole thing might as well have been generated by GPT-3 for all anyone could tell by reading it; it contains none of my actual writing voice, which i would not get back for another 8 years. there's none of my personality or life experience or soul in here
GPT-3 is literally a bullshit engine. it does not have a concept of words as referring to things; it plays games with words *only*, pure syntax, no semantics. literally the thing it is optimizing for when it produces text can be condensed to "put words here that sound good"
how could you tell if an AI was not a bullshit engine? (not sarcasm, this is something I'm genuinely uncertain about and matters a LOT for timelines)
i agree that this is a good and interesting question and i don't think i have much to say about it off the top of my head. one potential test: how long a correct mathematical proof can it generate? at some point i'd have to believe semantics were happening *somewhere*
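(a hypothetical example of the kind of artifact that test would ask for: a machine-checkable proof, here a trivial one written in Lean 4, where the checker either accepts the proof or rejects it, so "sounding good" buys nothing)

```lean
-- A trivial Lean 4 proof, only to illustrate the kind of object a proof
-- checker verifies: if a model emitted this and the kernel accepted it,
-- correctness would not be a matter of rhetorical plausibility.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```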
They’re definitely not /currently/ capable of that kind of stuff, but they’re improving at an incredible rate. Recently we’ve seen GPT-like models solve IMO problems and programming competition problems. And the returns don’t seem to be diminishing (yet) as we scale them up.