New paper from @dianamfranklin1 -- student *use* of a @scratch programming construct is only weakly correlated with their *understanding* of that construct. Artifact analysis isn't enough to measure learning. https://dl.acm.org/doi/pdf/10.1145/3341525.3387379
In this question, how would a child actually solve it at a computer? They would probably run the code and observe. I feel like we should ask questions that build more closely on students' skills. cc @ShriramKMurthi, curious how y'all approach evaluation in Bootstrap. pic.twitter.com/orDl3I1wj9
We evaluate relative to math. Not much "CS" evaluation. That's not really our goal. But Kathi and I have done a fair bit of CS-ish evaluation at the college level — just not Bootstrap.
End of conversation
New conversation
You can test physics understanding with word problems, e.g., the Force Concept Inventory: https://en.wikipedia.org/wiki/Force_Concept_Inventory
Valid point, but I don't think that works here. Physics has immutable laws that sometimes work in counterintuitive ways, but programming is malleable. Say a student thinks "primitive types have no default value" in Java. CS education research calls that a misconception; I call it a language flaw.
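For context, here is a minimal Java sketch (class name and variables are illustrative) of the rule the hypothetical student is confused about: primitive *fields* do receive default values, while primitive *local variables* do not and must be assigned before use.

```java
// Illustrates Java's default-value rules for primitive types.
public class Defaults {
    static int counter; // a primitive field gets a default value: 0

    public static void main(String[] args) {
        System.out.println(counter); // prints 0 even though never assigned

        int local;
        // System.out.println(local); // would NOT compile: local variables
        //                            // of primitive type have no default
        local = 42;
        System.out.println(local); // prints 42
    }
}
```

So the student's belief is wrong for fields but effectively right for locals, which is arguably the kind of language-design wrinkle the tweet is calling a flaw rather than a learner misconception.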