
dc.creator                 Vijayanarasimhan, Sudheendra
dc.date.accessioned        2011-06-02T14:33:43Z
dc.date.accessioned        2011-06-02T14:34:23Z
dc.date.available          2011-06-02T14:33:43Z
dc.date.available          2011-06-02T14:34:23Z
dc.date.created            2011-05
dc.date.issued             2011-06-02
dc.date.submitted          May 2011
dc.identifier.uri          http://hdl.handle.net/2152/ETD-UT-2011-05-3014
dc.description             text
dc.description.abstract    Visual recognition research develops algorithms and representations to autonomously recognize visual entities such as objects, actions, and attributes. The traditional protocol involves manually collecting training image examples, annotating them in specific ways, and then learning models to explain the annotated examples. However, this is a rather limited way to transfer human knowledge to visual recognition systems, particularly considering the immense number of visual concepts that are to be learned. I propose new forms of active learning that facilitate large-scale transfer of human knowledge to visual recognition systems in a cost-effective way. The approach is cost-effective in the sense that the division of labor between the machine learner and the human annotators respects any cues regarding which annotations would be easy (or hard) for either party to provide. The approach is large-scale in that it can deal with a large number of annotation types, multiple human annotators, and huge pools of unlabeled data. In particular, I consider three important aspects of the problem: (1) cost-sensitive multi-level active learning, where the expected informativeness of any candidate image annotation is weighed against the predicted cost of obtaining it in order to choose the best annotation at every iteration (see the sketch after this record); (2) budgeted batch active learning, a novel active learning setting that perfectly suits automatic learning from crowd-sourcing services, where there are multiple annotators and each annotation task may vary in difficulty; and (3) sub-linear time active learning, where one needs to retrieve those points that are most informative to a classifier in time that is sub-linear in the number of unlabeled examples, i.e., without having to exhaustively scan the entire collection. Using the proposed solutions for each aspect, I then demonstrate a complete end-to-end active learning system for scalable, autonomous, online learning of object detectors. The approach provides state-of-the-art recognition and detection results, while using minimal total manual effort. Overall, my work enables recognition systems that continuously improve their knowledge of the world by learning to ask the right questions of human supervisors.
dc.format.mimetype         application/pdf
dc.language.iso            eng
dc.subject                 Artificial intelligence
dc.subject                 Active learning
dc.subject                 Object recognition
dc.subject                 Object detection
dc.subject                 Cost-sensitive learning
dc.subject                 Multi-level learning
dc.subject                 Budgeted learning
dc.subject                 Large-scale active learning
dc.subject                 Live learning
dc.subject                 Machine learning
dc.subject                 Visual recognition system
dc.title                   Active visual category learning
dc.date.updated            2011-06-02T14:34:23Z
dc.description.department  Computer Sciences
dc.type.genre              thesis
thesis.degree.department   Computer Sciences
thesis.degree.discipline   Computer Science
thesis.degree.grantor      University of Texas at Austin
thesis.degree.level        Doctoral
thesis.degree.name         Doctor of Philosophy
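
As a reading aid for aspect (1) of the abstract, the lines below give a minimal, hypothetical Python sketch of a cost-weighted selection step: each candidate annotation's expected informativeness is weighed against its predicted cost, and the best trade-off is chosen. The names (Candidate, expected_info_gain, predicted_cost, select_annotation) and the gain-per-cost criterion are illustrative assumptions, not the thesis's actual formulation.

# A minimal, hypothetical sketch (not the thesis's method): pick the candidate
# annotation whose estimated information gain per unit of predicted cost is highest.
from dataclasses import dataclass

@dataclass
class Candidate:
    example_id: int
    annotation_type: str       # e.g., a coarse image label vs. a detailed object outline
    expected_info_gain: float  # estimated reduction in classifier uncertainty (assumed given)
    predicted_cost: float      # predicted annotation effort, e.g., seconds of annotator time

def select_annotation(candidates):
    """Return the candidate with the best informativeness-per-cost trade-off."""
    return max(candidates, key=lambda c: c.expected_info_gain / c.predicted_cost)

# Toy usage: a cheap coarse label beats a more informative but much costlier outline here.
pool = [
    Candidate(1, "image label",    expected_info_gain=0.4, predicted_cost=5.0),
    Candidate(2, "object outline", expected_info_gain=0.9, predicted_cost=40.0),
]
print(select_annotation(pool).annotation_type)  # prints: image label

In a full active learning loop, this selection step would be repeated after each round of annotation and classifier retraining.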

