Autonomous text capturing robot using improved DCT feature and text tracking

Makoto Tanaka, Hideaki Goto

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

14 Citations (Scopus)

Abstract

When an autonomous robot tries to find text in the surrounding scene using an onboard video camera, duplicate text images appear in the video frames. To avoid recognizing the same text many times, it is necessary to decrease the number of text candidate regions passed to recognition. This paper presents a text capturing robot that can look around the environment using an active camera. Text candidate regions are extracted from the images using an improved DCT feature, and the text regions are tracked through the video sequence so that the number of text images to be recognized is reduced. In the experiment, we tested the system on 460 images of a corridor containing fifteen signboards with text. The number of text candidate regions was reduced by 90.1% using our text tracking method.
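The abstract does not specify the details of the improved DCT feature, so the following is only a minimal sketch of the general idea behind DCT-based text detection: text-like texture concentrates energy in the low-to-mid frequency AC coefficients of small DCT blocks, so summing those coefficients per block gives a coarse text-candidate map. The block size, frequency band, threshold, and function names below are illustrative assumptions, not the authors' method.

    import numpy as np
    from scipy.fft import dctn

    def dct_text_energy(gray, block=8, ac_lo=1, ac_hi=6):
        """Per-block DCT 'text energy': sum of |AC coefficients| in a
        low-to-mid frequency band, which tends to be large for text-like
        texture. Returns one energy value per block."""
        h, w = gray.shape
        h, w = h - h % block, w - w % block          # crop to whole blocks
        energies = np.zeros((h // block, w // block))
        for by in range(0, h, block):
            for bx in range(0, w, block):
                patch = gray[by:by + block, bx:bx + block].astype(np.float64)
                coeffs = dctn(patch, norm="ortho")
                band = np.abs(coeffs[ac_lo:ac_hi, ac_lo:ac_hi])  # skip DC at (0, 0)
                energies[by // block, bx // block] = band.sum()
        return energies

    def text_candidate_mask(gray, threshold=200.0):
        """Threshold the block energies to obtain a coarse text-candidate mask
        (the threshold value here is an arbitrary placeholder)."""
        return dct_text_energy(gray) > threshold

In a tracking pipeline such as the one the paper describes, a mask like this would be computed per frame and candidate regions matched across frames, so that each physical signboard is sent to the recognizer only once; the matching step itself is not sketched here.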

Original language: English
Title of host publication: Proceedings - 9th International Conference on Document Analysis and Recognition, ICDAR 2007
Pages: 1178-1182
Number of pages: 5
DOIs
Publication status: Published - 2007
Event: 9th International Conference on Document Analysis and Recognition, ICDAR 2007 - Curitiba, Brazil
Duration: 2007 Sept 23 - 2007 Sept 26

Publication series

Name: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR
Volume: 2
ISSN (Print): 1520-5363

Conference

Conference: 9th International Conference on Document Analysis and Recognition, ICDAR 2007
Country/Territory: Brazil
City: Curitiba
Period: 07/9/23 - 07/9/26
