Are you saying your robot lawn mower has no intention of mowing the lawn either? Wait, are you saying my robot lawn mower has intentions?
Replying to @GaneshNatesh @NoelSharkey and
Not I.
@NoelSharkey might have...insofar as he asked the question: "Did the robot lawn mower decide to come and mow your lawn?" As for teenagers, the jury is still out regarding intention when it comes to matters of household chores.
Replying to @David_Gunkel @NoelSharkey and
Pretty sure
@NoelSharkey's question was rhetorical & he doesn't think robot mowers have intentions. Your analogy is faulty: it compares one system capable of intentions to another without them, around an issue that concerns that very intentionality.
Replying to @GaneshNatesh @David_Gunkel and
It may be perfectly sensible to describe a robot lawnmower as deciding to mow your lawn, if the robot lawnmower considered a variety of alternative possibilities first, and then chose one.
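The "considered a variety of alternative possibilities first, and then chose one" picture can be sketched as a toy decision rule. Everything here is invented for illustration (the function name, the candidate actions, the scoring formulas); it is not any real mower's firmware, just the minimal structure of scoring options and picking one:

```python
# Toy sketch: "consider alternatives, then choose one" as scoring
# candidate actions and taking the argmax. All names and numbers
# are hypothetical.

def choose_action(battery: float, grass_height: float) -> str:
    """Score each candidate action (0..1 inputs) and return the best."""
    candidates = {
        "mow":    grass_height - (1.0 - battery),  # favored when grass is tall, battery full
        "charge": 1.0 - battery,                   # favored when battery is low
        "idle":   0.1,                             # small constant baseline
    }
    return max(candidates, key=candidates.get)

print(choose_action(battery=0.9, grass_height=0.8))  # -> mow
print(choose_action(battery=0.1, grass_height=0.2))  # -> charge
```

Whether running an argmax over three numbers counts as "deciding" is, of course, exactly what the rest of the thread is arguing about.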
Replying to @evansd66 @GaneshNatesh and
It doesn't consider. It follows the routine: sensory input → perceive/categorize against the background of the training data (a neural net) → act. It's math, iterated, Dylan, plus energy ON/OFF to start mowing.
Replying to @TweetinChar @GaneshNatesh and
But it makes sense to describe the lawnmower in this way if it really does have a lot of options to choose between. Just like it makes sense to describe humans in this way, although of course humans too are just math, iterated, plus energy
Replying to @evansd66 @GaneshNatesh and
I know that some think that the map is the territory. It isn't. Plus: perception != understanding. See: Psychology 101
@GaryMarcus has questioned the hubris of taking AI models for true abstractions of human neural processes.
Replying to @TweetinChar @GaneshNatesh and
Actually,
@GaryMarcus questioned a very specific thing - whether neural nets are good abstractions of human nervous systems. On the broader question of AI models in general, he certainly does think that some of them *are* good models for human mental processes.
Replying to @evansd66 @GaneshNatesh and
Even a bad abstraction requires the complete understanding of the system. No expert would ever say that we know how the human mind works. We do not have that understanding that would allow a true abstraction.
Replying to @TweetinChar @GaneshNatesh and
We certainly don't have a complete understanding of how the human mind works. But we have a good general framework - the computational theory of mind.
I agree with this - and also recognize that it's not been fully proven.