
Logic and Computation Seminar

Monday, April 26, 2004 - 4:30pm

Toniann Pitassi

University of Toronto and IAS

Location

University of Pennsylvania

DRL 4C8

In this talk we prove new upper and lower bounds on the proper Probably Approximately Correct (PAC) learnability of decision trees, DNF formulas, and intersections of halfspaces. Several of our results were obtained by exploring a new connection between automatizability in proof complexity and learnability. After explaining this basic connection, we will prove the following new results.
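As background (not part of the original abstract), the standard definitions the results below refer to can be stated as follows; the notation $C$, $H$, $\varepsilon$, $\delta$ is ours. A concept class $C$ over $\{0,1\}^n$ is PAC learnable by a hypothesis class $H$ if there is an algorithm that, for every target $c \in C$, every distribution $D$ on $\{0,1\}^n$, and every $\varepsilon, \delta \in (0,1)$, when given $\mathrm{poly}(n, \mathrm{size}(c), 1/\varepsilon, 1/\delta)$ labeled examples $(x, c(x))$ with $x \sim D$, runs in polynomial time and, with probability at least $1 - \delta$, outputs a hypothesis $h \in H$ satisfying $\Pr_{x \sim D}[h(x) \neq c(x)] \leq \varepsilon$. The learning is proper when $H = C$. On the proof-complexity side, a proof system is automatizable if, given an unsatisfiable formula $F$, a refutation of $F$ can be found in time polynomial in the size of the shortest refutation of $F$ in that system.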

(1) We give new upper bounds for proper PAC learning of decision trees and DNF, based on known algorithms for the automatizability of Resolution.

(2) We show that DNF cannot be PAC learned by DNF hypotheses in polynomial time unless NP is contained in BPP. We prove the same negative result for proper PAC learning of intersections of halfspaces.

(3) We show that decision trees cannot be properly PAC learned, under a different (less standard) complexity-theoretic assumption.

This is joint work with Misha Alekhnovich, Mark Braverman, Vitaly Feldman, and Adam Klivans.