I suspect automated translation between s/w standards that were not built with awareness of each other is harder than translating between two natural languages.
I.e., people arguing that two Google-Duplex-class AIs talking to each other would switch to more "efficient" machine talk once they detect that the counterparty is also a machine are wrong.
This is only true of 2 agents that speak the same language and know that the other does too.
Consider how humans communicate across a language barrier: me and a Chinese speaker, for example (I don't know Chinese). There are only two outcomes:
One of us asks “do you speak English?” and we switch to that if yes... OR
We do painful pointing-and-miming to develop minimum common ground.
For dissimilar programs, common ground will likely be data at first. They might exchange dates for example, to detect each other’s time representations, and build from there.
Computer 1: dd-mm-yyyy?
Computer 2: mm-dd-yy?
Computer 1: mm-dd-yy!
(symmetry breaking etc needed)
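The little handshake above can be sketched in code. This is a minimal illustrative sketch, not a real protocol: the agent class, format names, and the random-nonce tie-break (one way to do the symmetry breaking) are all my own assumptions.

```python
import random

class Agent:
    """Hypothetical negotiating agent with a preferred data format."""
    def __init__(self, name, preferred, understood):
        self.name = name
        self.preferred = preferred          # format it proposes first
        self.understood = set(understood)   # formats it can parse
        self.nonce = random.random()        # symmetry breaker

def negotiate(a, b):
    """Return an agreed date format, or None if there is no common ground."""
    if a.preferred in b.understood and b.preferred in a.understood:
        # Both proposals would work. Without a tie-break, both agents
        # might insist (or defer) forever -- hence the nonces.
        winner = a if a.nonce > b.nonce else b
        return winner.preferred
    if a.preferred in b.understood:
        return a.preferred                  # "dd-mm-yyyy!"
    if b.preferred in a.understood:
        return b.preferred                  # "mm-dd-yy!"
    # No overlap at all: fall back to any shared format, or give up
    # and start pointing-and-miming with raw data samples (not shown).
    common = a.understood & b.understood
    return common.pop() if common else None

c1 = Agent("computer1", "dd-mm-yyyy", ["dd-mm-yyyy", "mm-dd-yy"])
c2 = Agent("computer2", "mm-dd-yy", ["mm-dd-yy", "dd-mm-yyyy"])
print(negotiate(c1, c2))  # one of the two formats, chosen by nonce
```

Note that this only works because both agents already share the meta-protocol of proposing format names, which is exactly the thread's point: negotiation presupposes some common ground.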
The machine learning angle here is orthogonal to the coordination/common ground problem. You need to solve both.
As s/w gets more embodied in robots/IoT, point-and-mimic-and-ground protocols will get much better, since the physical world is much richer than shared data contexts.
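To make grounding-through-shared-physical-context concrete, here is a toy sketch. Two devices observe the same physical events but timestamp them in different units and epochs, neither knowing the other's convention; the ratio of intervals between shared events reveals the unit mapping. The device names and unit choices are purely illustrative assumptions.

```python
def infer_scale(log_a, log_b):
    """Estimate the time-unit ratio between two logs of the same events.

    Uses intervals rather than absolute timestamps, so the two devices'
    differing epochs (their "zero points") don't matter.
    """
    da = log_a[-1] - log_a[0]
    db = log_b[-1] - log_b[0]
    return db / da

# Same three physical events, seen by both devices:
fridge = [100.0, 160.0, 220.0]            # seconds since its own epoch
vacuum = [5_000.0, 65_000.0, 125_000.0]   # milliseconds, different epoch
print(infer_scale(fridge, vacuum))  # → 1000.0, i.e. ms per second
```

The shared event stream does the work a shared standard would otherwise have to do, which is the sense in which rich physical context substitutes for co-evolved formats.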
A good way to develop intuitions around this problem is to ask: which would be the easier robot to build?
R2D2, who can shove a probe into any random computing system and hack it,
OR
C3PO who speaks 100s of languages?
The answer is, R2D2 is much easier if the galaxy runs on a set of mutually intelligible, co-evolved standards. Otherwise C3PO is easier.
So C3POa-to-C3POb comms will only default to R2D2a-to-R2D2b comms in the former case.
The interesting question is whether computing systems will evolve with greater or lesser built-in mutual awareness as they diversify and speciate. I'll bet on less.
This means they'll likely communicate mostly through an "air gap" protocol based on point-mimic-ground mutual learning in shared, rich physical contexts.
Your fridge and vacuum will learn each other the way a new cat learns to live with a resident dog.
Conceptually, my claim is that when there are no good shared maps, it's actually easier in most cases to go to the territory and construct a new shared map than to try to merge incompatible maps.
Circumstantial evidence is human communication, though that’s not a proof.
The only alternative I can think of is some gigantic top-down common world-modeling architecture, like a “Windows World for IoT (now with Blockchain consensus for fridges and vacuums, featuring OpenCycAlphaGoWolframAlpha!)”
I’ll throw in a pointer to Stalnaker in case anyone actually wants to go down this bunny trail of common-grounding communication web.mit.edu/philosophy/fac
