To be clear: I actually don't oppose funding this, but what drives me crazy is the combination of technological superiority and the utterly misguided model of how policymakers and operators could actually use this information.
-
-
I just saw an article making the point you are making sarcastically - what computers do to play chess or go is different from what humans do.
-
The point of what
@DARPA is doing is that there is much more information available now than any single person can take in, particularly in real time. So they want systems that can assist analysts, provide alerts, etc. This is building a corporate intelligence that transcends + -
and integrates over individual personalities and biases. In the longer term, it is laying a foundation for when AI become AGI and SAI. It is future-oriented work. I hear the call for better support of IR and scholarship in general but I'm not sure it's valid to say this competes.
-
I understand what they're doing. I just think it's dryly amusing that they're willing to use academic output and expertise indirectly, but not that interested in supporting it. You can understand why an academic in a field like this is not enthused about being a "feeder" to an AI.
-
Sure, but you want academic IR to be supported by DARPA?
-
Personally I don't like any of this - automation of decision making in war & diplomacy is extremely dangerous - but I think pooh-poohing AI is by now extremely shortsighted and also very dangerous. That is my agenda here.