Logic and Computation Seminar

Monday, April 7, 2014 - 3:30pm

Gregory Igusa

Notre Dame

Location

University of Pennsylvania

2C4

In complexity theory, there is an empirically observed phenomenon: a problem can be easy to solve in practice despite having provably high worst-case complexity. This discrepancy has often been dismissed as an unfortunate case in which theory fails to predict practice, but recent work in complexity theory studies the phenomenon rigorously.

In 1986, Levin introduced "average-case complexity," a form of complexity that takes into account all instances of a problem, not just the most difficult ones. Then, in 2003, Kapovich, Miasnikov, Schupp, and Shpilrain (KMSS) introduced "generic-case complexity," which considers the majority of instances of a problem while entirely ignoring the most difficult ones. While investigating generic-case complexity, KMSS noticed that there are unsolvable problems that are generically solvable in linear time, an observation that opens the door to questions about generic computability in general.

In 2012, Jockusch and Schupp introduced the purely computability-theoretic notion of "generic computability." A real is generically computable if there is a partial computable function that correctly computes the real on its domain, and whose domain has density 1 (in the sense of asymptotic density). Intuitively, a real is generically computable if there is an algorithm that usually halts and always gives correct answers. They also introduced a related notion, "coarse computability": a real is coarsely computable if there is a total computable function that is correct about that real on a set of density 1 -- an algorithm that always halts and usually gives correct answers. To study the degree structure of generic computability, they also defined "generic reducibility," a notion of reducibility in which oracles, like computations, are only required to halt on a set of density 1.
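Both definitions rest on asymptotic density: a set A of natural numbers has density 1 if the fraction of {0, ..., n-1} belonging to A tends to 1 as n grows. A minimal illustrative sketch (not from the talk; the example sets are chosen only for illustration):

```python
def partial_density(in_set, n):
    """Fraction of {0, ..., n-1} that belongs to the set.

    Asymptotic density is the limit of this quantity as n -> infinity,
    when that limit exists.
    """
    return sum(1 for k in range(n) if in_set(k)) / n

# The even numbers have density 1/2.
def is_even(k):
    return k % 2 == 0

# The non-powers of 2 have density 1: the powers of 2 are too sparse
# to affect the limit, so an algorithm allowed to diverge only on
# powers of 2 would still have a domain of density 1.
def is_non_power_of_two(k):
    return k == 0 or (k & (k - 1)) != 0

print(partial_density(is_even, 10**6))              # 0.5
print(partial_density(is_non_power_of_two, 10**6))  # 0.99998
```

In these terms, a generic algorithm for a real may diverge on a density-0 set of inputs (like the powers of 2 above) but must be correct wherever it halts, while a coarse algorithm halts everywhere but may err on a density-0 set.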

Neither generic computability nor coarse computability implies the other, but several results show that, morally, generic computability is the stronger notion. We use generic reducibility to investigate how close the two notions are by studying the generic degrees of coarsely computable reals. Although coarsely computable reals can be made quite difficult to generically compute, the only Turing information that can be coded into them is hyperarithmetic information. In fact, when this is made precise, it yields a characterization of the hyperarithmetic Turing degrees in terms of generic reduction and coarse computability.