Publications

A toolbox for calculating quantitative image properties in aesthetics research

Published in Behavior Research Methods, 2025

Over the past two decades, researchers in the field of visual aesthetics have studied numerous quantitative (objective) image properties and how they relate to visual aesthetic appreciation. However, results are difficult to compare between research groups. One reason is that researchers use different sets of image properties in their studies. However, even if the same properties are used, the image pre-processing techniques may differ, and researchers often use their own customized scripts to calculate the image properties. To provide better accessibility and comparability of research results in visual experimental aesthetics, we developed an open-access and easy-to-use toolbox called Aesthetics Toolbox. The Toolbox allows users to calculate a well-defined set of quantitative image properties popular in contemporary research. The properties include image dimensions, lightness and color statistics, complexity, symmetry, balance, Fourier spectrum properties, fractal dimension, self-similarity, as well as entropy measures and CNN-based variances. Compatible with most devices, the Toolbox provides an intuitive click-and-drop web interface. In the Toolbox, we integrated the original scripts of four different research groups and translated them into Python 3. To ensure that results were consistent across analyses, we took care that results from the Python versions of the scripts were the same as those from the original scripts. The toolbox, detailed documentation, and a link to the cloud version are available via GitHub: https://github.com/RBartho/Aesthetics-Toolbox. In summary, we developed a toolbox that helps to standardize and simplify the calculation of quantitative image properties for visual aesthetics research.
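Several of the listed properties reduce to simple statistics over pixel values. As a rough illustration (a minimal sketch, not the Toolbox's actual API; the function names are my own), mean lightness and the Shannon entropy of the grayscale histogram of an image can be computed with NumPy:

```python
import numpy as np

def lightness_stats(img):
    """Mean and standard deviation of pixel lightness (grayscale, 0-255)."""
    img = np.asarray(img, dtype=float)
    return img.mean(), img.std()

def shannon_entropy(img, bins=256):
    """Shannon entropy (in bits) of the grayscale intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # ignore empty bins; 0 * log(0) is defined as 0
    return float(-(p * np.log2(p)).sum())

# Illustrative use on a synthetic 64x64 gradient "image"
demo = np.tile(np.arange(64) * 4, (64, 1))
mean_l, std_l = lightness_stats(demo)
h = shannon_entropy(demo)
```

The Toolbox's actual implementations (e.g., of fractal dimension or CNN-based variances) are considerably more involved; see the GitHub repository for the real code and documentation.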

Recommended citation: Redies, C., Bartho, R., Koßmann, L., Spehar, B., Hübner, R., Wagemans, J., & Hayn-Leichsenring, G. U. (2025). A toolbox for calculating quantitative image properties in aesthetics research. Behavior Research Methods, 57(4). https://doi.org/10.3758/s13428-025-02632-3 https://link.springer.com/article/10.3758/s13428-025-02632-3

Need for Cognition, Cognitive Load, and Forewarning do not Moderate Anchoring Effects. A Replication Study of Epley & Gilovich (Journal of Behavioral Decision Making, 2005; Psychological Science, 2006)

Published in Journal of Comments and Replications in Economics, 2024

Anchoring, the assimilation of numerical estimates toward previously considered numbers, has generally been separated into anchoring from self-generated anchors (e.g., people first thinking of 9 months when asked for the gestation period of an animal) and experimenter-provided anchors (e.g., experimenters letting participants spin fortune wheels). For some time, the two types of anchoring were believed to be explained by two different theoretical accounts. However, later research showed crossover between the accounts. What now remains are contradictions between past and recent findings, specifically, which moderators affect which type of anchoring. We conducted three replications (Ntotal = 657) of seminal studies on the distinction between self-generated and experimenter-provided anchoring effects where we investigated the moderators need for cognition, cognitive load, and forewarning. We found no evidence that either type of anchoring is moderated by any of the moderators. In line with recent replication efforts, we found that anchoring effects were robust, but the findings on moderators of anchoring effects should be treated with caution.

Recommended citation: Röseler, L., Bögler, H. L., Koßmann, L., Krueger, S., Bickenbach, S., Bühler, R., della Guardia, J., Köppel, L.-M. A., Möhring, J., Ponader, S., Roßmaier, K., & Sing, J. (2024). Need for Cognition, Cognitive Load, and Forewarning do not Moderate Anchoring Effects. A Replication Study of Epley & Gilovich (Journal of Behavioral Decision Making, 2005; Psychological Science, 2006). Journal of Comments and Replications in Economics, 3(2024-6). https://doi.org/10.18718/81781.38 https://www.jcr-econ.org/need-for-cognition-cognitive-load-and-forewarning-do-not-moderate-anchoring-effects-replication/

Investigating the Gestalt Principle of Closure in Deep Convolutional Neural Networks

Published in European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 2024

Deep neural networks perform well in object recognition, but do they perceive objects like humans? This study investigates the Gestalt principle of closure in convolutional neural networks. We propose a protocol to identify closure and conduct experiments using simple visual stimuli with progressively removed edge sections. We evaluate well-known networks on their ability to classify incomplete polygons. Our findings reveal a performance degradation as the edge removal percentage increases, indicating that current models heavily rely on complete edge information for accurate classification. The data used in our study is available on GitHub: https://github.com/zhangyy708/closure-in-CNNs.
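The stimulus manipulation described above, deleting progressively larger sections of a polygon's outline, can be sketched as follows. This is an illustrative reconstruction under my own assumptions, not the authors' actual stimulus-generation code; function names and parameters are hypothetical:

```python
import math
import random

def polygon_edge_points(n_sides, n_points=360, radius=1.0):
    """Sample points evenly along the outline of a regular polygon."""
    verts = [(radius * math.cos(2 * math.pi * k / n_sides),
              radius * math.sin(2 * math.pi * k / n_sides))
             for k in range(n_sides)]
    pts = []
    per_edge = n_points // n_sides
    for k in range(n_sides):
        x0, y0 = verts[k]
        x1, y1 = verts[(k + 1) % n_sides]
        for i in range(per_edge):
            t = i / per_edge  # linear interpolation along the edge
            pts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return pts

def remove_edge_sections(points, removal_fraction, seed=0):
    """Delete a contiguous run of outline points, simulating a gap
    of a given percentage of the polygon's perimeter."""
    rng = random.Random(seed)
    n_remove = int(len(points) * removal_fraction)
    start = rng.randrange(len(points))
    dropped = {(start + i) % len(points) for i in range(n_remove)}
    return [p for i, p in enumerate(points) if i not in dropped]
```

Rendering such point sets at increasing `removal_fraction` values yields a family of increasingly incomplete polygons on which classification accuracy can then be measured.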

Recommended citation: Y. Zhang, D. Soydaner, F. Behrad, L. Koßmann, J. Wagemans (2024). Investigating the Gestalt Principle of Closure in Deep Convolutional Neural Networks, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 9-11 October, Bruges, Belgium. https://www.esann.org/sites/default/files/proceedings/2024/ES2024-111.pdf

Finding Closure: A Closer Look at the Gestalt Law of Closure in Convolutional Neural Networks

Published in arXiv preprint, 2024

The human brain has an inherent ability to fill in gaps to perceive figures as complete wholes, even when parts are missing or fragmented. This phenomenon is known as Closure in psychology, one of the Gestalt laws of perceptual organization, explaining how the human brain interprets visual stimuli. Given the importance of Closure for human object recognition, we investigate whether neural networks rely on a similar mechanism. Exploring this crucial human visual skill in neural networks has the potential to highlight their comparability to humans. Recent studies have examined the Closure effect in neural networks. However, they typically focus on a limited selection of Convolutional Neural Networks (CNNs) and have not reached a consensus on their capability to perform Closure. To address these gaps, we present a systematic framework for investigating the Closure principle in neural networks. We introduce well-curated datasets designed to test for Closure effects, including both modal and amodal completion. We then conduct experiments on various CNNs employing different measurements. Our comprehensive analysis reveals that VGG16 and DenseNet-121 exhibit the Closure effect, while other CNNs show variable results. We interpret these findings by blending insights from psychology and neural network research, offering a unique perspective that enhances transparency in understanding neural networks. Our code and dataset will be made available on GitHub.

Recommended citation: Y. Zhang*, D. Soydaner*, L. Koßmann, F. Behrad, J. Wagemans (2024). Finding Closure: A Closer Look at the Gestalt Law of Closure in Convolutional Neural Networks, arXiv preprint arXiv:2408.12460. https://arxiv.org/abs/2408.12460

A toolbox for calculating objective image properties in aesthetics research

Published in arXiv preprint, 2024

Over the past two decades, researchers in the field of visual aesthetics have studied numerous quantitative (objective) image properties and how they relate to visual aesthetic appreciation. However, results are difficult to compare between research groups. One reason is that researchers use different sets of image properties in their studies. But even if the same properties are used, the image pre-processing techniques may differ, and researchers often use their own customized scripts to calculate the image properties. To provide greater accessibility and comparability of research results in visual experimental aesthetics, we developed an open-access and easy-to-use toolbox (called the ‘Aesthetics Toolbox’). The Toolbox allows users to calculate a well-defined set of quantitative image properties popular in contemporary research. The properties include lightness and color statistics, Fourier spectral properties, fractality, self-similarity, symmetry, as well as different entropy measures and CNN-based variances. Compatible with most devices, the Toolbox provides an intuitive click-and-drop web interface. In the Toolbox, we integrated the original scripts of four different research groups and translated them into Python 3. To ensure that results were consistent across analyses, we took care that results from the Python versions of the scripts were the same as those from the original scripts. The toolbox, detailed documentation, and a link to the cloud version are available via GitHub: https://github.com/RBartho/Aesthetics-Toolbox. In summary, we developed a toolbox that helps to standardize and simplify the calculation of quantitative image properties for visual aesthetics research.

Recommended citation: Redies, C., Bartho, R., Koßmann, L., Spehar, B., Hübner, R., Wagemans, J., & Hayn-Leichsenring, G. U. (2024). A toolbox for calculating objective image properties in aesthetics research. arXiv preprint arXiv:2408.10616. https://arxiv.org/abs/2408.10616

Reconstructing a disambiguation sequence that forms perceptual memory of multistable displays via reverse correlation method: Bias onset perception but gently

Published in Journal of Vision, 2023

When multistable displays are presented intermittently with long blank intervals, their onset perception is determined by perceptual memory of multistable displays. We investigated when and how it is formed using a reverse correlation method and bistable kinetic depth effect displays. Each experimental block consisted of interleaved fully ambiguous probe and exogenously disambiguated prime displays. The purpose of the former was to “read out” the perceptual memory, whereas the latter contained purely random disambiguation sequences that were presented at the beginning of the prime display, throughout the entire presentation, or at the beginning and the end of the presentation. For each experiment and condition, we selected a subset of trials with disambiguation sequences that led to a change in perception of either the prime itself (sequences that modified perception) or the following fully ambiguous probe (sequences that modified perceptual memory). We estimated average disambiguation sequences for each participant using additive linear models. We found that an optimal sequence started at the onset with a moderate disambiguation against the previously dominant state (dominant perception for the previous probe) that gradually reduced until the display was fully ambiguous. We also show that the same sequence leads to an altered perception of the prime, indicating that perception and perceptual memory form at the same time. We suggest that perceptual memory is a consequence of an earlier evidence accumulation process and is informative about how the visual system treated ambiguity in the past rather than how it anticipates an uncertain future.
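The core of the reverse correlation logic, averaging the random disambiguation sequences over only those trials that produced the outcome of interest, can be sketched minimally. This is a classification-image-style toy example with hypothetical variable names; the study itself estimated the sequences with additive linear models rather than a plain mean:

```python
def reverse_correlate(trials):
    """Average disambiguation sequences over selected trials.

    Each trial is a (sequence, outcome_changed) pair, where `sequence`
    holds the random disambiguation strength at each time step and
    `outcome_changed` marks whether perception (or perceptual memory)
    changed on that trial. Averaging only the sequences that produced
    a change estimates the 'effective' disambiguation time course.
    """
    selected = [seq for seq, changed in trials if changed]
    n_steps = len(selected[0])
    return [sum(seq[t] for seq in selected) / len(selected)
            for t in range(n_steps)]

# Toy example: three trials, each with a 4-step disambiguation sequence
trials = [
    ([0.8, 0.5, 0.2, 0.0], True),   # perception changed
    ([0.6, 0.3, 0.0, 0.0], True),   # perception changed
    ([0.0, 0.9, 0.9, 0.9], False),  # perception did not change
]
kernel = reverse_correlate(trials)  # averages the first two sequences only
```

In this toy case the recovered kernel starts high and decays toward zero, qualitatively matching the "bias onset perception, but gently" pattern the paper reports.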

Recommended citation: Pastukhov, A., Koßmann, L., & Carbon, C.-C. (2023). Reconstructing a disambiguation sequence that forms perceptual memory of multistable displays via reverse correlation method: Bias onset perception but gently. Journal of Vision, 23(3), 10. https://doi.org/10.1167/jov.23.3.10 https://jov.arvojournals.org/article.aspx?articleid=2785454

Perceptions of persons who wear face coverings are modulated by the perceivers’ attitude

Published in Frontiers in Neuroscience, 2022

We examined if the effect of facial coverings on person perception is influenced by the perceiver’s attitudes. We used two online experiments in which participants saw the same human target persons repeatedly appearing with and without a specific piece of clothing and had to judge the target persons’ character. In Experiment 1 (N = 101), we investigated how the wearing of a facial mask influences a person’s perception depending on the perceiver’s attitude toward measures against the COVID-19 pandemic. In Experiment 2 (N = 114), we examined the effect of wearing a head cover associated with Arabic culture on a person’s perception depending on the perceiver’s attitude toward Islam. Both studies were preregistered; both found evidence that person perception is a process shaped not merely by the target person’s outward appearance but also by the personal attitudes of the perceiver. Integrating previous findings, we demonstrate that facial covers, as well as head covers, operate as cues which are used by the perceivers to infer the target persons’ underlying attitudes. The judgment of the target person is shaped by the perceived attitude toward what the facial covering stereotypically symbolizes.

Recommended citation: Leder, J., Koßmann, L., & Carbon, C.-C. (2022). Perceptions of persons who wear face coverings are modulated by the perceivers’ attitude. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.988546 https://www.frontiersin.org/articles/10.3389/fnins.2022.988546/full

Replicating Epley and Gilovich: Need for Cognition, Cognitive Load, and Forewarning do not Moderate Anchoring Effects

Published in PsyArXiv, 2022

Anchoring, the assimilation of numerical estimates toward previously considered numbers, has generally been separated into anchoring from self-generated anchors (e.g., people first thinking of 9 months when asked for the gestation period of an animal) and experimenter-provided anchors (e.g., experimenters letting participants spin fortune wheels). For some time, the two types of anchoring were believed to be explained by two different theoretical accounts. However, later research showed crossover between the accounts. What now remains are contradictions between past and recent findings, specifically, which moderators affect which type of anchoring. We conducted three replications (Ntotal = 653) of seminal studies on the distinction between self-generated and experimenter-provided anchoring effects where we investigated the moderators need for cognition, cognitive load, and forewarning. We found no evidence that either type of anchoring is moderated by any of the moderators. In line with recent replication efforts, we found that anchoring effects were robust, but the findings on moderators of anchoring effects should be treated with caution.

Recommended citation: Röseler, L., Bögler, H. L., Koßmann, L., Krueger, S., Bickenbach, S., Bühler, R., della Guardia, J., et al. (2022, April 13). Replicating Epley and Gilovich: Need for Cognition, Cognitive Load, and Forewarning do not Moderate Anchoring Effects. PsyArXiv. Retrieved from https://psyarxiv.com/bgp3m/

When perception is stronger than physics: Perceptual similarities rather than laws of physics govern the perception of interacting objects

Published in Attention, Perception & Psychophysics, 2021

When several multistable displays are viewed simultaneously, their perception is synchronized, as they tend to be in the same perceptual state. Here, we investigated the possibility that perception may reflect embedded statistical knowledge of physical interaction between objects for specific combinations of displays and layouts. We used a novel display with two ambiguously rotating gears and an ambiguous walker-on-a-ball display. Both stimuli produce a physically congruent perception when an interaction is possible (i.e., gears counterrotate, and the ball rolls under the walker’s feet). Next, we gradually manipulated the stimuli to either introduce abrupt changes to the potential physical interaction between objects or keep it constant despite changes in the visual stimulus. We characterized the data using four different models that assumed (1) independence of perception of the stimulus, (2) dependence on the stimulus’s properties, (3) dependence on physical configuration alone, and (4) an interaction between stimulus properties and a physical configuration. We observed that for the ambiguous gears, the perception was correlated with the stimulus changes rather than with the possibility of physical interaction. The perception of walker-on-a-ball was independent of the stimulus but depended instead on whether participants responded about a relative motion of two objects (perception was biased towards physically congruent motion) or the absolute motion of the walker alone (perception was independent of the rotation of the ball). Neither experiment supported the idea of embedded knowledge of physical interaction.

Recommended citation: Pastukhov, A., Koßmann, L., & Carbon, C.-C. (2022). When perception is stronger than physics: Perceptual similarities rather than laws of physics govern the perception of interacting objects. Attention, Perception, & Psychophysics, 84, 124–137. https://doi.org/10.3758/s13414-021-02383-1 https://link.springer.com/article/10.3758/s13414-021-02383-1