Visual perception

Model
Digital Document
Publisher
Boca Raton, Fla.
Description
The remarkable genius of Gerard Manley Hopkins' visual perception,
as revealed in his journals and poems, is a product of the intensity
with which the poet conceives a thing in terms of the physical action
prompted by it, and is the result of the vibrant joining of perceiver
and percept. He defines a scene so that the reader may see and praise
God, the Creator of each thing in the landscape. The joining of God,
perceiver, and percept is a dynamic communion charged with energy.
According to Hopkins, the flow of language should match the original
sensation of the single unified effect upon the beholder of the scene;
such a sensation appears in direct relationship to the intensity of the
poet's visual interpretation of the scene.
Model
Digital Document
Publisher
Florida Atlantic University
Description
A paired-comparisons procedure was used to obtain
relative duration judgements of identical pairs of normal
(e.g., A vs. A) or rotated (e.g., V vs. V) letters. Each
pair of letters was presented simultaneously for a duration
of 50 msec, with one letter in the LVF (left visual field,
right hemisphere), and one in the RVF (right visual field,
left hemisphere). It was hypothesized that LVF presentations
of rotated letters would have a greater apparent
duration. This was based on Hock, Kronseder, and Corcoran's
(1975) demonstration that rotated letters presented in the
LVF produce longer reaction times than RVF presentations on
a visual comparisons task. The results were that subjects' "left" vs. "right"
responses did not differ significantly for any of the
conditions. Methodological considerations were cited as a
possible reason for the failure to confirm the present
hypothesis.
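For illustration only, scoring such paired-comparison data might take the following form in Python; the trial counts, condition labels, and choice of a binomial test are assumptions for the sketch, not details taken from the thesis.

    # Hypothetical scoring of the paired-comparison duration task.
    # Each trial records whether the "left" or "right" letter appeared longer.
    from scipy.stats import binomtest

    # Illustrative counts (not real data): "left"-longer responses
    # out of total trials, per letter condition.
    conditions = {
        "normal":  {"left_longer": 52, "trials": 100},
        "rotated": {"left_longer": 55, "trials": 100},
    }

    for name, c in conditions.items():
        # Under the null hypothesis of no visual-field difference in
        # apparent duration, "left" responses occur with p = 0.5.
        result = binomtest(c["left_longer"], c["trials"], p=0.5)
        print(f"{name}: P(left longer) = {c['left_longer'] / c['trials']:.2f}, "
              f"p-value = {result.pvalue:.3f}")

A non-significant result for every condition, as reported above, would leave these proportions near 0.5.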
Model
Digital Document
Publisher
Florida Atlantic University
Description
Individual differences were obtained in a task requiring
the reproduction of familiar and unfamiliar dot patterns.
These individual differences were related to Hock's (1973)
distinction between Ss emphasizing analytic vs. structural
processes. For some Ss (structural), reproductive performance
was facilitated by past experience, presumably because
these Ss acquired a structural organizational scheme of
knowledge. For the other Ss (analytic), reproductive performance
was retarded by past experience, presumably because
these Ss acquired a knowledge of "distinctive" features.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Rock's procedure for separating the effect of objective
and retinal spatial reference by varying stimulus orientation
and body posture was used in conjunction with the "same-different"
reaction time paradigm. It was predicted that
the individual differences in perceptual processing (analytic
and structural) obtained by Hock (1973) would involve
different determinants of spatial reference, these being
retinal reference for analytic processing and objective
reference for structural processing. The results show that
analytic subjects, as hypothesized, referenced perceptual
information to a retinal coordinate system. Structural
subjects, however, seemed to reference perceptual information
to both objective and retinal coordinates. The results for
structural subjects were attributed to the unexpected finding
that subjects who were structural while upright became
analytic when in a reclining position. The latter finding
suggested that Rock's methodology for separating the effects
of retinal and objective orientation relies on the subjects
employing the same mode of processing in all bodily postures.
Model
Digital Document
Publisher
Florida Atlantic University
Description
A "same-different" reaction time paradigm was used to
investigate the influence of context on the perception of
multiple object scenes consisting of "real-world" objects.
The relationships among these objects were manipulated to
compose four different contextual arrangements. This enabled
an investigation of three aspects of context: familiarity, physical plausibility, and belongingness.
Differences in reaction time between the four levels of context
were significant for both same and different responses.
Furthermore, a correlational analysis indicated individual
differences in the use of contextual effects. Those subjects
who were most influenced by whether or not the objects
belonged together were least influenced by the disruption
of the rules of physical plausibility, and vice versa. Correlational analyses concerned with the relationship
between individual differences in context effects and emphasis
on structural versus analytic processing (Hock, 1973) were
nonsignificant, though in the predicted direction.
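The form of such a correlational analysis might be sketched as follows; the per-subject effect indices are fabricated for illustration, and the use of a Pearson correlation is an assumption rather than a detail taken from the thesis.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-subject context effects: RT costs (msec) attributable
    # to violating belongingness vs. physical plausibility. Made-up values.
    belongingness_effect = np.array([35, 12, 48, 20, 5, 41])
    plausibility_effect = np.array([8, 44, 5, 30, 52, 10])

    r, p = pearsonr(belongingness_effect, plausibility_effect)
    print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the reported trade-off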
Model
Digital Document
Publisher
Florida Atlantic University
Description
This study tested the hypothesis that the perception of 2-flash apparent motion (points of light are briefly presented in succession at nearby locations) is the outcome of competition between two opposing motion directions activated by the stimulus. Experiment 1 replicated previous results obtained using 2-flash stimuli; motion was optimal for a non-zero inter-frame interval (Kolers, 1972; Wertheimer, 1912). In Experiment 2, stimuli were pared down to a single luminance change toward the background at one location and a single luminance change away from the background at another. Results were consistent with apparent motion being specified by the counter-changing luminance; motion was optimal for a non-zero inter-frame interval. A subtractive model based on counter-change stimulating opposing motion directions did not account for the results of the 2-flash experiment. An alternative model based on the combined transient responses of biphasic detectors is discussed.
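One illustrative reading of the counter-change idea, sketched in Python (the detector form, background handling, and stimulus values are assumptions, not the author's model):

    import numpy as np

    # Minimal counter-change motion signal: motion from location A toward
    # location B is supported when luminance changes toward the background
    # at A and away from the background at B.
    def counterchange_signal(lum_a, lum_b, background):
        """lum_a, lum_b: 1-D arrays of luminance over time at two locations."""
        # Change toward the background at A: |lum_a - background| decreases.
        toward_a = -np.diff(np.abs(lum_a - background))
        # Change away from the background at B: |lum_b - background| increases.
        away_b = np.diff(np.abs(lum_b - background))
        # Both changes must be present for an A-to-B motion signal.
        return np.maximum(toward_a, 0) * np.maximum(away_b, 0)

    # Example: location A dims back to the background while B brightens.
    bg = 0.5
    a = np.array([1.0, 1.0, 0.5, 0.5])  # offset at location A
    b = np.array([0.5, 0.5, 1.0, 1.0])  # onset at location B
    print(counterchange_signal(a, b, bg))  # positive only at the transition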
Model
Digital Document
Publisher
Florida Atlantic University
Description
This dissertation deals with novel vision-based motion cues called the Visual Threat Cues (VTCs), suitable for autonomous navigation tasks such as collision avoidance and maintenance of clearance. The VTCs are time-based and provide some measure of relative change in range as well as clearance between a 3D surface and a moving observer. They are independent of the 3D environment around the observer and need almost no a priori knowledge about it. Each VTC presented in this dissertation has a corresponding visual field associated with it. Each visual field constitutes a family of imaginary 3D surfaces attached to the moving observer. All the points that lie on a particular imaginary 3D surface produce the same value of the VTC. These visual fields can be used to demarcate the space around the moving observer into safe and danger zones of varying degree. Several approaches to extracting the VTCs from a sequence of monocular images have been suggested. A practical method to extract the VTCs from a sequence of images of 3D textured surfaces, obtained by a visually fixating, fixed-focus moving camera, is also presented. This approach is based on the extraction of a global image dissimilarity measure called the Image Quality Measure (IQM), which is extracted directly from the raw data of the gray-level images. Based on the relative variations of the measured IQM, the VTCs are extracted. This practical approach to extracting the VTCs needs no 3D reconstruction, depth information, optical flow, or feature tracking. The algorithm to extract the VTCs was tested on several indoor as well as outdoor real image sequences. Two vision-based closed-loop control schemes for autonomous navigation tasks were implemented in a priori unknown textured environments using one of the VTCs as relevant sensory feedback information. They are based on a set of IF-THEN fuzzy rules and need almost no a priori information about the vehicle dynamics, speed, direction of motion, etc. They were implemented in real-time using a camera mounted on a six degree-of-freedom flight simulator.
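Since the IQM formula is not spelled out in this abstract, the sketch below substitutes a simple global dissimilarity measure (mean absolute gray-level difference between consecutive frames) and a hypothetical relative-variation cue; the function names and the stand-in measure are both assumptions.

    import numpy as np

    def iqm(frame_prev, frame_curr):
        # Stand-in global image dissimilarity between two gray-level frames.
        return np.mean(np.abs(frame_curr.astype(float) - frame_prev.astype(float)))

    def visual_threat_cue(iqm_values):
        # Relative variation of the measure over time; rising values would be
        # read as decreasing clearance, per the "relative variations of the
        # measured IQM" described above.
        iqm_values = np.asarray(iqm_values, dtype=float)
        return np.diff(iqm_values) / iqm_values[:-1]

    # Usage with a list of gray-level frames:
    # series = [iqm(f0, f1) for f0, f1 in zip(frames, frames[1:])]
    # cue = visual_threat_cue(series)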
Model
Digital Document
Publisher
Florida Atlantic University
Description
When perceivers examine a visual scene, they can control the extent to which their attention is either narrowly focused or spread over a larger spatial area. The experiments reported in this dissertation explore the consequences of narrow vs. broad attention for simple spatial discriminations as well as more complex cooperative interactions that are the basis for the self-organization of coherent motion patterns. Subjects' attentional spread (narrow or broad) is manipulated by means of a primary, luminance detection task. In conjunction with the luminance detection task is a secondary, spatial discrimination or detection task, which differs in the four reported experiments. In Experiment 1, the discrimination of misalignment of two visual elements is enhanced by narrowly focused attention. In Experiment 2, discrimination of horizontal spatial separation of two visual elements is improved for small inter-element distances by narrow attention and for relatively large inter-element distances by broad attention. Experiment 3 shows that the inter-element distance among counterphase-presented visual elements for which unidirectional and oscillatory motion patterns are observed with equal frequency depends on subjects' attentional spread. Narrow attention favors the oscillatory pattern and broad attention favors the unidirectional pattern. Experiment 4 shows that attentional spread has a minimal effect on the detection of motion and, additionally, that attentional effects on simple spatial judgments (Experiments 1 and 2) are too small to account for the large shift in the equi-probable boundary of reported unidirectional and oscillatory motion patterns found in Experiment 3. It is therefore concluded, in conjunction with Hock and Balz's (1994) differential gradient model, that attentional spread influences the self-organization of unidirectional and oscillatory motion patterns through its effects on the relative strength of facilitating and inhibiting interactions among directionally selective motion detectors.
Model
Digital Document
Publisher
Florida Atlantic University
Description
This dissertation deals with vision-based perception-action closed-loop control systems based on 2-D visual cues. These visual cues are used to calculate the relevant control signals required for autonomous landing and road following. For the landing task, it has been shown that nine 2-D visual cues can be extracted from a single image of the runway. Seven of these cues can be used to accomplish the parallel flight and glideslope tracking tasks of landing. For the road following task, three different algorithms based on two different 2-D visual cues are developed. One of the road following algorithms can be used to generate steering and velocity commands for the vehicle. Glideslope tracking of the landing task has been implemented in real-time on a six-degree-of-freedom flight simulator. It has been shown that the relevant information computed from the 2-D visual cues is robust and reliable for the landing tasks. Road following algorithms were tested successfully at up to 50 km/h on a US Army High Mobility Multipurpose Wheeled Vehicle (HMMWV) equipped with a vision system and on a Denning mobile robot. The algorithms have also been tested successfully using PC-based software simulation programs.
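As a purely hypothetical illustration of a 2-D image-plane cue driving a control signal (the abstract does not define the two road-following cues, so the cue, gain, and proportional law below are assumptions, not the dissertation's algorithms):

    # Steering from a single 2-D cue: the normalized lateral offset of the
    # road centerline from the image center. All names and gains are
    # illustrative.
    def steering_command(left_edge_x, right_edge_x, image_width, k_steer=0.8):
        center_x = (left_edge_x + right_edge_x) / 2.0
        offset = (center_x - image_width / 2.0) / (image_width / 2.0)  # in [-1, 1]
        return -k_steer * offset  # steer back toward the road center

    print(steering_command(left_edge_x=200, right_edge_x=500, image_width=640))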
Model
Digital Document
Publisher
Florida Atlantic University
Description
When motion occurs in a scene, the quality of video degrades due to motion smear, which results in a loss of contrast in the image. The characteristics of the human visual system when smooth pursuit eye movements occur are different from those when the eye fixates on an object, such as a video screen, during motion. Smooth pursuit eye movements dominate in the presence of dynamic stimuli. In the presence of smooth pursuit eye movements, the contrast sensitivity for increasing target velocities shifts toward lower spatial frequencies, and the sensitivity for low spatial frequencies during motion is higher than for a stationary case. This dissertation proposes a method to improve the perceptual quality of video using a temporal enhancement prefiltering technique based on the characteristics of Smooth Pursuit Eye Movements (SPEM). The resulting technique closely matches the characteristics of the human visual system (HVS). When motion occurs, the eye tracks the moving targets in a scene rather than fixating on any portion of the scene. Hence, psychophysical studies of smooth pursuit eye movements were used as a basis for designing the temporal filters. Results of experiments show that temporal enhancement improves quality by increasing the apparent sharpness of the image sequence. The dissertation also presents a review of research describing how motion affects image quality at the camera lens and the human eye, and uses that research to develop the temporal enhancement technique for video degraded by motion.
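One plausible form of such a prefilter is temporal unsharp masking along the frame axis, sketched below; the actual filters in the dissertation are derived from SPEM contrast-sensitivity data, so the kernel and gain here are assumptions.

    import numpy as np

    def temporal_enhance(video, gain=1.5):
        # video: array of shape (frames, height, width), 8-bit gray levels.
        v = video.astype(float)
        # Temporal low-pass: 3-frame moving average at each pixel.
        kernel = np.ones(3) / 3.0
        lowpass = np.apply_along_axis(
            lambda t: np.convolve(t, kernel, mode="same"), 0, v)
        # Boost the temporal high-frequency residue to counter motion smear.
        return np.clip(v + gain * (v - lowpass), 0, 255)

Boosting temporal high frequencies raises the apparent sharpness of tracked (pursued) targets, which is the effect the abstract attributes to the enhancement.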