Most recognize the weaknesses of "teaching to the test," but the exhausting pressure to cram is only one part of the problem; the other part is that bubble tests disappear from life after school while self-direction, courage, creativity, & open-ended problem-solving become crucial.
-
The demand for normally distributed student rankings is central to the problem. Diverse, self-directed projects might be hard to score or rank numerically, but "I can't evaluate a student's competencies based on real work" would be an inane claim.
-
I was once arguing that students should be able to code their own projects, even marketable products, without being limited by a rubric that might create tension between "best score" & "best product." My acquaintance: "But then how do we know they've really learned to code?"
-
Replying to @webdevMason
Well, usually these rubrics are designed not because instructors are incapable of evaluating work, but because it is *very* difficult to evaluate work fairly. If everything is your best guess, the standard you use on the 20th project is guaranteed to be different from the one you used on the 1st.
-
Replying to @DiracWinsAgain
All this assumes that it is in some sense necessary to *rank* students rather than evaluate their progress individually, which — whether fair or not — has a lot of arguably concerning social effects
-
Replying to @webdevMason @DiracWinsAgain
While ranking is really toxic, even "evaluating their progress" goes too far in my view. If a child starts a project, it is *their* project. The proper role of an educator is to help with it according to the *child's criteria* - not to judge whether it fits a preset agenda.
-
Lots of people have asked me to explain things they think I know. No one has EVER asked me to evaluate their progress. Who does that?
-
To restate this idea in yet another way - in this video John Holt articulates this difference by contrasting quizzes and questions. (Love this clip) https://www.youtube.com/watch?v=_I1-BaU7Hg0
-
You're both right — on reflection, I think "evaluation" implies way too much similarity to "scoring," which isn't really what I'm trying to point at, quantitatively or qualitatively. More like opportunities for iterative feedback, back-and-forth that can produce more questions
-
The value in that is that humans vary in their level of comfort with asking questions, particularly when they don't really know what their question is, even if they can recognize when feedback is or is not helpful.