It's DARPA's mission to go far beyond what can be done today. Back in the day, I pitched a project to them. Sounds like they are totally taken with AI and maybe far too impressed with Silicon Valley. Anyhow, my project didn't get funded, but what they were doing made more sense.
Replying to @CherylRofer @wellerstein
I actually think the government should make high-risk bets. I'm not sure that the corpus of academic IR has much to say about how the world works.
Replying to @profmusgrave @CherylRofer
The argument for a lot of the DARPA AI projects at the time was, "yes, we know that THIS problem isn't necessarily the best one to solve, but if we had an idea how to solve it, it would give us an approach that could be used for other problems." FWIW
The big AI problem for them (again, a couple of years ago) was "context" — e.g., can you make an AI that rapidly and more or less reliably constructs context from scratch, the way a human can? Under this heading a lot of odd stuff was being funded.
E.g., "Can you teach an AI to improvise jazz in real time?" is a DARPA AI project (I know one of the PIs very well). Not because DARPA cares about jazz, but because of the context/improvisation question.
Replying to @wellerstein @profmusgrave
The reason I said that AI is garbage is that the problem hasn't even been defined well. Humans do a great many things that computers can't. We don't know which is important, nor what intelligence is.
Replying to @CherylRofer @profmusgrave
What, you don't think that playing chess is the best indication of how human intelligence works? /s
Replying to @wellerstein @profmusgrave
I just saw an article making the point you are making sarcastically - what computers do to play chess or Go is different from what humans do.
The point of what @DARPA is doing is that there is much more information available now than any single person can take in, particularly in real time. So they want systems that can assist analysts, provide alerts, etc. This is building a corporate intelligence that transcends and integrates over individual personalities and biases. In the longer term, it is laying a foundation for when AI becomes AGI and SAI. It is future-oriented work. I hear the call for better support of IR and scholarship in general, but I'm not sure it's valid to say this competes.
I understand what they're doing. I just think it's dryly amusing that they're willing to use academic output and expertise indirectly, but not that interested in supporting it. You can understand why an academic in a field like this is not enthused about being a "feeder" to an AI.
Replying to @wellerstein @CherylRofer and
Sure, but you want academic IR to be supported by DARPA?
Replying to @mgubrud @wellerstein and
Personally, I don't like any of this - automation of decision-making in war & diplomacy is extremely dangerous - but I think pooh-poohing AI is by now extremely shortsighted and also very dangerous. That is my agenda here.
End of conversation