Top-level takeaway of tonight's catch-up reading on ML: I still find it hard to take AI Risk seriously as a special problem. There are real risks, but they don't seem qualitatively different from other kinds of engineering risk.
Computer viruses already have a limited form of independent self-replication (within specific computer networks, not the open world). Mutating memes in general qualify too, if you treat human brains as a low-agency replication substrate.