Abstract
Despite numerous studies of computational models of visual saliency, it is still difficult to predict where humans will attend in a scene. One of the problems is the effect of self-motion on retinal images. Saliency is usually calculated from local motion signals, so a high degree of saliency is assigned to motion components caused by self-motion. Since human observers usually ignore motion caused by their own movement, the prediction accuracy of attention locations falls off. We developed a framework that reduces the saliency of motion components caused by self-motion, using a physiological model of optic flow processing that extracts object motion from among self-motion signals.
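The abstract gives no implementation details, so the following is only a minimal sketch of the idea: estimate the global flow component attributable to self-motion and let only the residual (object) motion drive the motion-saliency channel. It assumes dense Farnebäck optical flow as the local motion signal and a least-squares affine fit as a stand-in for the paper's physiological optic-flow model; the function name `motion_saliency` is illustrative, not from the paper.

```python
# Sketch: motion saliency with the self-motion component suppressed.
# Assumption: an affine global-motion fit approximates the retinal flow
# induced by self-motion (the paper instead uses a physiological model).
import cv2
import numpy as np

def motion_saliency(prev_gray, curr_gray):
    """prev_gray, curr_gray: consecutive grayscale (uint8) frames."""
    # Dense optical flow between frames -- the local motion signals.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)], axis=1)

    # Least-squares 6-parameter affine fit of the flow field: a crude
    # global-motion (self-motion) model over the whole image.
    A, _, _, _ = np.linalg.lstsq(pts, flow.reshape(-1, 2), rcond=None)
    global_flow = (pts @ A).reshape(h, w, 2)

    # Residual flow = local motion minus the predicted self-motion
    # component; its magnitude is the reduced motion-saliency map.
    residual = flow - global_flow
    saliency = np.linalg.norm(residual, axis=2)
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)
```

In the paper the global-motion estimate comes from a physiological model of optic flow processing (MST-like mechanisms) rather than an affine fit; the sketch only shows where that estimate enters the saliency computation.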
Original language | English |
---|---|
Pages | 783-787 |
Number of pages | 5 |
DOIs | |
Publication status | Published - 2013 |
Event | 2013 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013 - Naha, Okinawa, Japan. Duration: 2013 Nov 5 → 2013 Nov 8 |
Conference
Conference | 2013 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013 |
---|---|
Country/Territory | Japan |
City | Naha, Okinawa |
Period | 2013 Nov 5 → 2013 Nov 8 |
Keywords
- Gaze estimation
- Global motion
- Saliency map
- Self-motion