Very large computational complexity classes like ELEMENTARY and PR are not important: at that scale you are usually limited to computing the problem on tiny instances anyway, so asymptotic behavior as input size goes to infinity is not relevant.
There are many languages for defining programs that are guaranteed to halt, and they come in a wide range of expressiveness. It's not clear to me that it matters which large complexity class they correspond to. Does PR have some advantage over, e.g., the functions provably total in PA?
-
-
If you can find really large complexity classes for which all programs halt and where it is feasible to decide whether a program is in that class, then that corresponds to a totality checking procedure that accepts a large set of (useful) programs.
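As a toy illustration of such a feasible membership check (a minimal Python sketch; the AST encoding here is hypothetical): the LOOP language of Meyer and Ritchie allows only bounded iteration and computes exactly the primitive recursive functions, so "is this program in the class?" reduces to a simple syntactic scan for unbounded loops.

```python
# Hypothetical mini-language AST: programs are nested tuples whose first
# element names the node kind. A program is certified total iff it uses
# only bounded LOOP-style iteration and never an unbounded while-loop.
def is_certified_total(node):
    kind = node[0]
    if kind in ("const", "var"):
        return True
    if kind == "assign":             # ("assign", name, expr)
        return is_certified_total(node[2])
    if kind == "seq":                # ("seq", stmt1, stmt2, ...)
        return all(is_certified_total(s) for s in node[1:])
    if kind == "loop":               # ("loop", bound_expr, body): repeat body
        return is_certified_total(node[2])  # a pre-computed number of times
    if kind == "while":              # unbounded: may diverge, so reject
        return False
    return False                     # unknown constructs are rejected
```

The check is linear in program size, yet every accepted program is total; the price is that acceptance is conservative and rejects some programs that do in fact halt.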
-
Yes, I know. My point was that among these very large complexity classes, there doesn't seem to be much advantage to landing in a smaller one. Why is it better to have a primitive recursive program than one you merely know always halts for some more general reason?
-
You might not want to allow users to write programs that are that slow. For example, programs typeable with dual light affine logic are guaranteed to execute in polynomial time. https://arxiv.org/pdf/0710.1153.pdf …
-
Primitive recursion is already expressive enough to let you easily write programs that are far slower than any reasonable bound you might want for how slow a program runs.
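For instance (a minimal sketch): the tower-of-exponentials function is primitive recursive, being nothing more than a bounded loop, yet its values outrun any plausible runtime bound almost immediately.

```python
# tower(0) = 1, tower(n+1) = 2 ** tower(n). A single bounded loop, so
# this is LOOP-computable and hence primitive recursive, yet tower(5)
# already has ~20,000 digits and tower(6) is far beyond any feasible
# computation.
def tower(n):
    result = 1
    for _ in range(n):   # bounded iteration: no while-loop needed
        result = 2 ** result
    return result
```

So "primitive recursive" already admits programs whose running time dwarfs ELEMENTARY bounds on all but trivial inputs.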
-
A better example than DLAL is EAL (elementary affine logic), whose terms reduce in ELEMENTARY time. EAL-typeable lambda calculus terms can be reduced using Lamping's optimal interaction net reduction algorithm. Weaker notions of totality do not allow for this optimal reduction, AFAIK.
-
Ok, I don't actually know what any of this is. What is the sense of "optimal" in which optimal reduction of lambda calculus terms corresponds to ELEMENTARY?
-
The reduction algorithm is optimal in Lévy's sense: maximal sharing guarantees that no redex is ever duplicated, so no evaluation strategy (strict or lazy) can do less work. The algorithm just doesn't work on terms whose reduction takes longer than ELEMENTARY time. https://dl.acm.org/citation.cfm?id=96711 …
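A much-simplified way to see why sharing matters (a sketch, not optimal reduction itself; the names `expensive`, `use_twice`, `memoized` are illustrative): call-by-name re-evaluates an argument at every use, while call-by-need memoizes it so it is evaluated at most once. Lamping's algorithm pushes this further, sharing work even inside function bodies.

```python
calls = 0

def expensive():
    """Stands in for an arbitrary costly subterm."""
    global calls
    calls += 1
    return 42

def use_twice(thunk):
    # The argument appears twice in the body.
    return thunk() + thunk()

def memoized(thunk):
    # Wrap a thunk so its result is computed at most once (call-by-need).
    cache = []
    def forced():
        if not cache:
            cache.append(thunk())
        return cache[0]
    return forced

use_twice(expensive)             # call-by-name: expensive() runs twice
by_name_calls = calls
calls = 0
use_twice(memoized(expensive))   # call-by-need: expensive() runs once
by_need_calls = calls
```

Optimal reduction generalizes this: even redexes that only appear after substituting under a lambda are shared, which call-by-need cannot do.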
-
https://pdfs.semanticscholar.org/c161/2cc3dcfb04169cfd353dcf546aaf3ef81fcc.pdf … details the relationship between EAL and Lamping's algorithm. I'm not sure whether there are non-EAL-typeable lambda terms that Lamping can evaluate, but most evaluable terms are EAL-typeable.