The human perceptual and sensory systems evolved to aid in both food gathering and survival. The same sensory perception systems that allow humans to evaluate a potential threat also help with more mundane signals such as traffic signs, web applications, and cellular phone interfaces (Sarkar et al., 1993). Human interaction and performance with the world or with products are affected by the quality of the information we perceive; it is therefore critical that information displayed in design—of interfaces, controls, or signs—be easily perceptible (Proctor et al., 2018). Taking the scientific, bottom-up perspective of the stimulus world, this evaluation uses the information-processing approach to discuss the significance of signal strength as a foundation of interaction design, together with a science-based product review of a self-checkout system.
Sensory Perception
Humans have five sensory organs that work together in the perception of environmental signals. The two primary senses—vision and hearing—are wave-based and have the properties of frequency/wavelength and amplitude (Spielman, 2020). With light, frequency is analogous to color and amplitude to luminance; with sound, they correspond to pitch and pressure. Touch, smell, and taste employ sensory neurons for signal detection much as vision and hearing do. For those without visual impairment, vision is the most important sense, followed by hearing (Enoch et al., 2019). This review concentrates on visual perception, considering the roles of signal detection and contrast in how the eye detects and processes external visual stimuli.
Signal Detection
Signal detection theory (SDT) is applicable to any system in which a distinction must be made between two separate signals; classically, it has been applied to discriminate between a desired signal (the stimulus) and an interfering noise signal (Stanislaw et al., 1999). Visual signal detection is achieved primarily through luminance contrast, the difference between the luminance of two neighboring signals—typically a target signal and its background (Karetsos et al., 2021). A well-designed signal must be balanced between the just-noticeable difference (JND) required for minimal detection and the opposite extreme of being so strong that it causes fatigue or is eventually ignored (Wylie et al., 2021). Humans are difference and edge detectors, and from a bottom-up signal-processing standpoint, abrupt-onset signals capture attention more readily than limited-onset or no-onset signals (Sunny et al., 2013). Overly strong signals can be used when the intention is to call for an action that requires immediate attention; however, thought needs to go into their design (Sewell et al., 2011). Signals close to the JND can be problematic for elderly populations or people with visual impairments, who may not be able to distinguish the signal from the background (Humes et al., 2009).
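SDT summarizes how well an observer separates signal from noise with the sensitivity index d′, computed from the hit rate and the false-alarm rate. The sketch below is purely illustrative (the function name and the example rates are hypothetical) and uses only the Python standard library:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# An observer who detects 90% of targets but false-alarms on 20% of noise trials:
print(round(d_prime(0.90, 0.20), 2))  # 2.12
```

A d′ of 0 means the observer cannot distinguish signal from noise at all; larger values mean the two internal distributions are further apart and the signal is easier to detect.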
Contrast
Contrast is the amount of light or darkness an object has relative to its background (Karetsos et al., 2021). Contrast is achieved through luminance, saturation, hue, magnitude, and color combination. It is often expressed in terms of contrast sensitivity (CS), the ability to perceive sharp, clear outlines on small objects (Kaur et al., 2022). Because humans are difference detectors, luminance contrast—the difference between the luminance of a target signal and that of its background—is more important in signal detection than absolute luminance. Luminance contrast is critical to vision; without it, humans cannot detect spatial or temporal patterns (Kaplan, 2008). The luminance emitted by an object is not the light that typically reaches the eye, as light can be reflected, absorbed, and refracted on its way to the visual sensory systems, which reduces the signal (Clery et al., 2013). Brightness is a subjective term for the perceived luminance detected by the rods and cones in our eyes (Rossi et al., 1999). Hue is what most people associate with color, and saturation is the purity of that color. Balancing saturation against hue is a design consideration: in a given interface, users will typically detect higher-saturation items before lower-saturation items, even when the lower-saturation item is larger (Wetzel, 2019). The ability of the human visual system to detect signals is affected by both the magnitude of the signal and its spatial frequency. Low spatial frequencies—large objects with wide spacing—convey coarse details, whereas high spatial frequencies—small contrasting objects closely aligned—convey fine details (Kauffmann et al., 2014). Environmental conditions (e.g., outdoors in natural daylight, or a nighttime automobile dashboard) also play a role in determining the proper magnitude for a signal (Norman et al., 2022).
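Two conventional ways to quantify the luminance contrast described above are Weber contrast (a small target on a uniform background) and Michelson contrast (periodic patterns such as gratings). A minimal sketch, with hypothetical function names and example luminances in cd/m²:

```python
def weber_contrast(target_luminance: float, background_luminance: float) -> float:
    # Weber contrast: suited to a small target on a large uniform background.
    return (target_luminance - background_luminance) / background_luminance

def michelson_contrast(l_max: float, l_min: float) -> float:
    # Michelson contrast: suited to periodic patterns; ranges from 0 to 1.
    return (l_max - l_min) / (l_max + l_min)

# A 180 cd/m^2 target on a 60 cd/m^2 background:
print(weber_contrast(180, 60))      # 2.0
print(michelson_contrast(180, 60))  # 0.5
```

Both measures depend only on the relation between target and background luminance, not on absolute luminance, which mirrors the point that humans are difference detectors.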
With regard to color combination, maximum contrast is achieved by using complementary colors, although this can have a disturbing and fatiguing visual effect. Optimum contrast can instead be achieved via the split-complement scheme, where, as an example, a primary object would be in red and its background color would be a mix of its complementary hues of blue and green. From a design perspective, human perception of an object's color also depends on the color of the background.
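On the additive (RGB/HSV) color wheel used by displays, the exact complement of a hue sits halfway around the wheel, and the split complements flank it on either side; red's exact complement is cyan, an equal mix of green and blue, consistent with the blue-green mix described above. A small illustrative sketch (the function names and the 30° offset are assumptions, not from the source):

```python
import colorsys

def complement_hue(h: float) -> float:
    # Rotate a hue (0.0-1.0) halfway around the additive color wheel.
    return (h + 0.5) % 1.0

def split_complement_hues(h: float, offset: float = 30 / 360) -> tuple:
    # The two hues flanking the exact complement by +/- 30 degrees.
    c = complement_hue(h)
    return ((c - offset) % 1.0, (c + offset) % 1.0)

# Red (hue 0.0): its exact complement is cyan, an equal green/blue mix.
r, g, b = colorsys.hsv_to_rgb(complement_hue(0.0), 1.0, 1.0)
print((r, g, b))  # (0.0, 1.0, 1.0)
```

Note that this is the additive wheel of emissive displays; the traditional artists' (RYB) wheel pairs red with green-tinged blue instead, so the "complement" a designer picks depends on which model the medium uses.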
Visual Sensory System / The Anatomy of the Eye
Initial contact with the human visual sensory system occurs at the front of the eye, where the cornea, pupil, and lens are located. These three components work in combination to project a sharply focused, inverted image onto the retina at the rear of the eye. The cornea is responsible for focusing light rays on the retina while also offering a protective layer for the internal parts of the eye (Sridhar, 2018). The lens, via the ciliary muscles, bends to focus the light into a sharp image and is also responsible for accommodation, the ability to transition focus between near and distant objects. In response to brightness, the pupils constrict or dilate to control the amount of light reaching the retina; without this regulation, perceiving objects would require additional mental effort (Mathôt, 2018).
Visual signals are next processed by the retinal system at the rear of the eye, which consists of photoreceptors, a fovea, various neuronal cells, and an optic nerve. Near the center of the retina lies a small area called the fovea, our region of sharp focus. The fovea provides high visual acuity and color sensitivity and is the primary area for processing visual information in daylight or well-lit conditions (Stewart et al., 2020). The retina contains two types of photoreceptors, rods and cones, which begin processing the signal received via the cornea, lens, and pupil (Masland, 2012). Rods are more numerous, are located in the periphery of the eye, and are responsible for scotopic (low-light) vision. Rods pick up light from all directions and are the primary signal detectors for peripheral vision, night vision, and motion detection. Rods are more light-sensitive than cones, but they do not perceive color and offer limited visual acuity (Gloriani et al., 2019). The human eye contains over 120 million rod cells and approximately 6 million cone cells (Kim et al., 2011).
Cones are densely populated in the foveal area, which allows for high visual acuity (Gloriani et al., 2019). Cones come in long-wavelength (L, red), medium-wavelength (M, green), and short-wavelength (S, blue) variations, each of which processes light of different wavelengths. An overlap between the ranges processed by the long and medium cones allows the detection of yellow. Color vision comes from the ability of our neural systems and brain to process the information detected by these three cone types; a lower-than-normal number, or abnormalities, of one type of cone can cause color vision loss (Zhang et al., 2019). Cones cannot process low-light signals, and hence the foveal area is not used in darkened or night conditions. The photoreceptor cells (rods and cones) transmit information through a system of horizontal, bipolar, and amacrine cells, which in turn transmit the information to the retinal ganglion cells (RGCs) (Masland, 2012). The horizontal and amacrine cells exhibit lateral inhibition, in which a stimulated neuron can inhibit the signal transmissions of its surrounding neurons; lateral inhibition is crucial to spatial sensitivity, contour perception, and edge detection (Yeonan-Kim et al., 2016). To make sense of our world, humans have evolved neurologically to amplify and accentuate edges and contours. The retinal ganglion cells are neurons that connect the input from the retinal system to the brain’s visual processing system via electrical impulses. This processing is done via the RGCs’ center-surround receptive fields, which are organized in relation to the lateral inhibition created by the horizontal and amacrine cells (Cook et al., 1998). RGCs come with one of three types of receptive fields—ON, OFF, or ON-OFF—depending on how they respond to a stimulus, and the types appear in about equal numbers.
ON cells increase their firing rate in response to an increase in light stimulus, OFF cells in response to a decrease, and ON-OFF cells respond to both conditions (Weinbar et al., 2018). The ratio of photoreceptors to RGCs is approximately 100 to 1, although the convergence ratio is not fixed (Kim et al., 2021). In the densely packed foveal area, the ratio is close to 1 cone per RGC, which accounts for its high visual acuity, while in the peripheral areas it can approach 100 rods per RGC, which accounts for peripheral vision being more blurred. Visual information received by the retina is not transmitted as-is to the brain’s visual processing system; instead, it is first pre-processed by the RGCs. The more than 18 types of RGCs extract information such as spatial detail, motion, texture, and light level and transmit it to the appropriate parts of the visual processing system (Kim et al., 2021).
Design Review
A shopper stopped at an unfamiliar CVS location for the first time. The sun was setting on a February evening in the Eastern time zone. She went inside the store in haste, but by the time her essentials were gathered, she could see through the windows that it was dark outside. As she did at all CVS stores, she walked up to a self-checkout to pay for the items. Upon arriving at the counter and seeing that the exit door was on her left, she set the basket down on the right tray, scanned the items, and placed them on the left tray. Next, she looked for the payment option but kept receiving an error message: “Please place your item in the bagging area.”
Self-Checkout System (SCO)
The product under review is a Toshiba SCO at a CVS/Pharmacy, shown in Figure 1. Although prior knowledge is not the focus of this discussion, a user would normally assume which side the bagging area is on, scan the product barcodes, and place all scanned items in the bagging area; the user then proceeds to payment once the total weight on the bagging tray matches the expected weight.
Customer Journey
In Figure 1, the shopper saw a self-checkout counter with six significant areas and two obvious signals. Her eyes first noticed the word ‘OPEN’ on the visual display terminal (VDT), then moved up to the ‘NO CASH’ sign indicating a cashless transaction. In Figure 2, the main menu appeared on the VDT’s screen when she arrived at the counter. She immediately saw another signal—the image of herself at the top left (A)—indicating that the SCO system used camera-based technology to monitor her transaction, even though the red “Monitoring in Progress” text was barely visible; perhaps it was not meant to be detected. Ambient lighting affects the illumination from a visual display and the movements of the eyes (Wilkins, 1986), so she did not see the instruction area that said, “Scan Item and Place in Bag” (C), with a red arrow pointing to the right. Her eyes also missed the accessibility button (H) at the bottom center, its blue color blending into the light blue or grayish background of the screen. Although dark text on a lighter background (positive polarity) carries a high display luminance advantage (Buchner et al., 2009), areas D–E, with light text on a darker background (negative polarity), have appropriate contrast ratios averaging 7:1, so she was able to locate them without difficulty. However, she missed the “Request Help” text in area G because of its contrast ratio of only 3:1.
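Contrast ratios such as the 7:1 and 3:1 values above are conventionally computed from relative luminance, as defined in WCAG 2.x. The sketch below assumes 8-bit sRGB inputs; the function names are illustrative, not Toshiba's or CVS's:

```python
def relative_luminance(r: int, g: int, b: int) -> float:
    # WCAG 2.x relative luminance from 8-bit sRGB components.
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    # (L_lighter + 0.05) / (L_darker + 0.05); ranges from 1:1 to 21:1.
    l1, l2 = relative_luminance(*fg), relative_luminance(*bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# White text on a black background reaches the maximum ratio:
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # 21.0
```

Under WCAG's guidelines, 7:1 meets the enhanced (AAA) threshold for normal text while 3:1 only meets the minimum for large text, which is consistent with areas D–E being easy to read and area G being missed.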
Design Recommendation
The shopper finally understood that the error message stemmed from her first interaction at the SCO, when she placed the shopping basket on the right, in the bagging area. The weight of the basket obviously did not match the expected weight of the scanned items, which prompted the error message. To prevent the same confusion for other users, Toshiba should clearly mark the basket and bagging areas with text, icons, or both for better signal detection. Creating a stimulus detectable by all users involves considering those with color blindness or age-related decline in color perception. The color combinations for areas C–G of Figure 2 should vary in the yellow-blue direction and avoid yellow-white, which is among the confusable pairs (Ishihara et al., 2001). Area C in Figure 2 uses the Cornsweet contour to emphasize edge contrast for the low-saturation red arrow; however, the arrow still gives a weak signal of low symbol luminance against the grayish background. The shopper would most likely detect the arrow better with a higher screen luminance combination (Lin, 2005), such as changing it to a highly saturated green. Increasing the size of the arrow and the space between area C and areas D–G would also enhance its differences from the other segments on the display. Finally, the accessibility button (H) in Figure 2 has a spatial-frequency issue that makes it difficult to discriminate from its surroundings; creating more space around it would help it pop out.
Conclusion
Toshiba can enhance the design of this SCO and its components by making good use of the knowledge offered by the attributes of the perceptual processing systems. Signal detection is the foundation of the sensory systems transmitting stimulation to the processing brain. Recognizing these principles is beneficial to providing a quality product and user experience.
References
Buchner, A., Mayr, S., & Brandt, M. (2009). The advantage of positive text-background polarity is due to high display luminance. Ergonomics, 52(7), 882-886. https://www.tandfonline.com/doi/full/10.1080/00140130802641635
Cook, P. B. & McReynolds, J. S. (1998). Lateral inhibition in the inner retina is important for spatial tuning of ganglion cells. Nature Neuroscience 1(8), 714-9. doi: 10.1038/3714
Clery, S. & Bloj, H. J. (2013). Interactions between luminance and color signals: Effects on shape. Journal of Vision, 13(5), 16. https://doi.org/10.1167/13.5.16
Enoch, J., McDonald, L., & Jones, L. (2019). Evaluating whether sight is the most valued sense. JAMA Ophthalmology, 137(11), 1317-1320. doi: 10.1001/jamaophthalmol.2019.3537
Gloriani, A. H. & Schütz, A. C. (2019). Humans trust central vision more than peripheral vision even in the dark. Current Biology, 29(7), 1206-1210. doi: 10.1016/j.cub.2019.02.023
Humes, L. E., Busey, T. A., Craig, J. C., & Kewley-Port, D. (2009). The effects of age on sensory thresholds and temporal gap detection in hearing, vision, and touch. Attention, Perception, & Psychophysics, 71(4), 860–871. doi: 10.3758/APP.71.4.860
Ishihara, K., Ishihara, S., Nagamachi, M., Hiramatsu, S. & Osaki, H. (2001). Age-related decline in color perception and difficulties with daily activities–measurement, questionnaire, optical and computer-graphics simulation studies. International Journal of Industrial Ergonomics, 28(3–4), 153-163. doi: 10.1016/S0169-8141(01)00028-2
Kaplan, E. (2008). Luminance sensitivity and contrast detection. The Senses: A Comprehensive Reference, Academic Press, 29-43. https://doi.org/10.1016/B978-012370880-9.00294-2
Kauffmann, L., Ramanoël, S., & Peyrin, C. (2014). The neural bases of spatial frequency processing during scene perception. Frontiers in Integrative Neuroscience, 8(May). https://doi.org/10.3389/fnint.2014.00037
Karetsos, G. & Chandrinos, A. (2021). Contrast sensitivity measurement tests and methods. Ophthalmology Research: An International Journal, 15(2), 7-18. doi: 10.9734/OR/2021/v15i230208
Kaur, K. & Gurnani, B. (2022). Contrast Sensitivity. StatPearls Publishing. https://www.ncbi.nlm.nih.gov/books/NBK580542
Kim, U. S., Mahroo, O. A., Mollon, J. D., & Yu-Wai-Man, P. (2021). Retinal ganglion cells—diversity of cell types and clinical relevance. Frontiers in Neurology, 12, 661938. doi: 10.3389/fneur.2021.661938
Lin, C. C. (2005). Effects of screen luminance combination and text color on visual performance with TFT-LCD. International Journal of Industrial Ergonomics, 35(3). https://doi.org/10.1016/j.ergon.2004.09.002
Masland, R. H. (2012). The Neuronal Organization of the Retina. Neuron, 76(2), 266-280. https://doi.org/10.1016/j.neuron.2012.10.002
Mathôt, S. (2018). Pupillometry: psychology, physiology, and function. Journal of Cognition, 1(1), 16. doi: 10.5334/joc.18
Norman, J. F., Baig, M., Eaton, J. R., Graham, J. D., & Vincent, T. E. (2022). Aging and the visual perception of object size. Scientific Reports, 12(1), 17148. https://doi.org/10.1038/s41598-022-22141-z
Rossi, A. F. & Paradiso M. A. (1999). Neural correlates of perceived brightness in the retina, lateral geniculate nucleus, and striate cortex. Journal of Neuroscience, 19(14), 6145–6156. doi: 10.1523/jneurosci.19-14-06145.1999
Sarkar, S., & Boyer, K.L. (1993). Perceptual organization in computer vision: A review and a proposal for a classificatory structure. IEEE Transactions on Systems, Man, and Cybernetics, 23(2), 382-399. doi: 10.1109/21.229452
Sewell, K. & Smith, P. (2011). Attentional control in visual signal detection: Effects of abrupt-onset and no-onset stimuli. Journal of Experimental Psychology: Human Perception and Performance, 38(4), 1043-1068. doi: 10.1037/a0026591
Spielman, M. R. (2020). Waves and Wavelengths. In Masland, R., Albright, T., Gardner, E., The Senses: A Comprehensive Reference (pp. 29-43). Academic Press.
Stanislaw, H. & Todorov, N. (1999). Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers, 31(1), 137-149. doi: 10.3758/bf03207704
Stewart, E. M., Valsecchi, M., & Schütz, A. C. (2020). A review of interactions between peripheral and foveal vision. Journal of Vision, 20(12), 24. https://doi.org/10.1167/jov.20.12.24
Sunny, M. & von Mühlenen, A. (2013). Attention capture by abrupt onsets: Re-visiting the priority tag model. Frontiers in Psychology, 4(Dec). https://doi.org/10.3389/fpsyg.2013.00958
Wilkins, A. (1986). Intermittent illumination from visual display units and fluorescent lighting affects movements of the eyes across text. Human Factors: The Journal of the Human Factors and Ergonomics Society, 28(1), 75-81. doi: 10.1177/001872088602800108
Wylie, G. R., Yao, B., Sandry, J., & DeLuca, J. (2021). Using signal detection theory to better understand cognitive fatigue. Frontiers in Psychology. doi: 10.3389/fpsyg.2020.579188
Yeonan-Kim, J. & Bertalmío, M. (2016). Retinal lateral inhibition provides the biological basis of long-range spatial induction. PLOS ONE 11(12). https://doi.org/10.1371/journal.pone.0168963
Zhang, F., Kurokawa, K., Lassoued, A., Crowell, J. A., & Miller, D. T. (2019). Cone photoreceptor classification in the living human eye from photostimulation-induced phase dynamics. Proceedings of the National Academy of Sciences, 116(16), 7951-7956. https://doi.org/10.1073/pnas.1816360116