Professional Homepage of Lisa Kossmann, MSc.
Published:
Two superimposed, semitransparent, orthogonally oriented faces produce perceptual rivalry: one face is clearly perceived at a time, and perception continuously switches between them. We investigated whether perceptual dominance of an individual face is determined by high-level properties, such as gender, age, or emotion, or by low-level properties. To this end, we used 20 female and 20 male faces, aged 20 to 25 years, from the Chicago Face Database. They were randomly paired using a round-robin tournament schedule (eight blocks, 20 trials each). Participants viewed a face pair and continuously indicated which face they currently perceived via key presses. We computed two measures of dominance for each face: (a) the proportion of trials in which it was the first face perceived at onset and (b) the proportion of time it was dominant throughout the trial. An exploratory data analysis using linear mixed models showed no systematic relationship between either of the two measures and high-level face descriptors, such as gender, age, or emotions (see https://osf.io/q2fjd). We conclude that in face rivalry, perceptual dominance is determined primarily by low-level features, such as the size or relative width of the face, or by salient local features such as birthmarks.
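The two dominance measures lend themselves to a compact computation. Below is a minimal, hypothetical sketch of how they could be derived from per-trial key-press records; the field names and data layout are illustrative, not taken from the study:

```python
# Hypothetical sketch: computing the two face-dominance measures
# from per-trial records. Data layout is illustrative only.

def dominance_measures(trials, face_id):
    """Return (onset dominance, time dominance) for one face.

    trials: list of dicts with
      'first_face' -- id of the face perceived first at stimulus onset
      'durations'  -- dict mapping face id -> total dominance time (s)
    """
    # (a) proportion of trials in which this face was perceived first
    onset_wins = sum(1 for t in trials if t['first_face'] == face_id)
    onset_dominance = onset_wins / len(trials)
    # (b) proportion of total reported time this face was dominant
    face_time = sum(t['durations'].get(face_id, 0.0) for t in trials)
    total_time = sum(sum(t['durations'].values()) for t in trials)
    time_dominance = face_time / total_time if total_time else 0.0
    return onset_dominance, time_dominance
```

Per-face values of either measure could then serve as the dependent variable in a mixed-model analysis like the one described above.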
Published:
When several multistable displays are viewed at the same time, the same perceptual state typically tends to be dominant for all stimuli, and perceptual switches tend to occur at the same time (so-called perceptual coupling). We investigated whether this can be altered by an opportunity for a physical interaction between objects. We used a well-established version of the walker-on-a-ball display plus a novel display consisting of two rotating gears. In the default configuration, perception of both displays was congruent with physically interacting objects. We gradually altered the displays to either produce an abrupt change in the potential interaction (e.g., moving objects away from each other) or keep it constant despite the visual changes (disambiguating one of the objects). We fit four models that assumed (1) independence of perception of the stimuli, (2) dependence on the stimulus’s properties, (3) dependence on the physical configuration alone, and (4) an interaction between stimulus properties and the physical configuration. For the gears display, perception depended on the stimulus properties, as in perceptual coupling, rather than on the possibility of physical interaction. For the walker-on-a-ball display, perception depended neither on the stimulus nor on the possibility of physical interaction but on whether participants were asked to report on the relative motion of both objects or the absolute motion of the walker alone. This suggests that perception of the walker-on-a-ball was driven primarily by expectations. The results reveal multiple perceptual mechanisms acting at various levels of processing, whereas priors of physical interaction had little influence.
Published:
When multistable displays are presented intermittently with a long blank interval, they become stabilized via perceptual memory. However, we still lack an understanding of perceptual memory’s role in daily vision, its mechanisms, and even the conditions that lead to its formation. Therefore, we used a reverse correlation method to recover a biasing sequence that forms perceptual memory. In the experiment, participants reported on the direction of rotation of intermittently presented (800 ms on, 1000 ms off) kinetic-depth effect displays. We interleaved fully ambiguous probes (to read out the current state of perceptual memory) with biased primes (size/distance cues), randomly disambiguated during the first 300 ms with 10 different bias levels, each lasting 30 ms. We collected 10,000+ trials for three participants. First, we validated the method by computing an average sequence that produced an opposite perceptual dominance in primes: a moderate (50%) bias in favor of the suppressed percept that is gradually reduced to full ambiguity. Next, we repeated the analysis but computed average bias sequences that preceded the dominance change in the following probe, i.e., a prime biasing sequence that established a new perceptual memory. We found the same pattern but with a weaker initial bias. We also computed a bias sequence that reversed the probe (the formed perceptual memory) but not the prime itself. Here, an even gentler bias was required; as such events were rare, additional data are necessary to confirm this pattern’s consistency. In short, to form perceptual memory: gently bias onset perception itself. Open data and methods: https://osf.io/gw7kc/
Published:
The relationship between mirror symmetry and aesthetic appreciation has intrigued vision scientists, empirical aesthetics researchers, and artists alike, but concrete evidence remains somewhat elusive to this day. In this multidisciplinary project, we investigate human symmetry detection for 100 images of artworks and relate these behavioral data to aesthetic appreciation and computational metrics. Participants were asked to place a rectangular bounding box around an image region they perceived as mirror-symmetric and to indicate the axis of symmetry. They could place as many boxes as they saw fit. For each box, they also rated the perceived saliency of the region (i.e., how much it popped out from the background) and the strength of the symmetry (i.e., from rather imperfect to almost perfect symmetry). Statistical analysis of 2839 symmetries by 23 participants so far reveals that participants selected bigger regions of symmetry first and rated them higher on saliency and strength of symmetry. Vertical axes of symmetry were most frequently indicated (around 80%). We used different metrics for image quality assessment to compute symmetry accuracy scores for the bounding boxes, revealing large discrepancies between participant ratings and objectively computed symmetry strength. These discrepancies between human and computational symmetry assessment emphasize the need to go beyond traditional computer vision and employ deep learning models. Aesthetic liking of the images, rated by a different pool of observers, seems to be independent of both strength and saliency ratings (correlations < .1). This could be because mirror symmetry is only one aspect of good composition. Human data collection is still ongoing, including aesthetic judgements from the same participants. Additionally, we will train a deep learning model on symmetry detection and figure-ground segmentation, which we will present alongside these findings.
Open data and methods: https://osf.io/9tf4e/ Acknowledgment: This work is funded by an ERC Advanced Grant (No. 101053925) awarded to JW.
Published:
Although mirror symmetry is an established and popular principle of perceptual organization, human symmetry detection in images of natural scenes remains highly understudied compared to symmetry detection in artificially created dot patterns and shapes. In this multidisciplinary project, we investigate human symmetry detection in 100 images of natural scenes in relation to quantitative metrics derived from computer vision and machine learning. In our study, participants were asked to place a rectangular bounding box around an image region they perceived as mirror-symmetric and to indicate the axis of symmetry. They could place as many bounding boxes as they saw fit. For each box, they also rated the perceived saliency of the region (i.e., how much it popped out from the background) and the strength of the symmetry (i.e., from rather imperfect to almost perfect symmetry). Statistical analysis of 2173 symmetries by 17 participants so far reveals that participants selected bigger, more salient regions of symmetry first. Vertical axes were much more frequent (around 75%) than horizontal and oblique ones. Horizontally and vertically symmetric regions were found to be more salient and more symmetric than oblique ones. Saliency and strength ratings were moderately correlated (around 0.4) across all regions and images. We used different metrics for image quality assessment to compute symmetry accuracy scores for the bounding boxes, revealing large discrepancies between human and computational symmetry assessment (correlations below 0.1), both for saliency and strength. This emphasizes the need to go beyond traditional computer vision algorithms and employ deep learning models. Human data collection is still ongoing, and we also plan to train a deep learning model on symmetry detection and present it alongside these findings. Open data and methods: https://osf.io/9tf4e/ Acknowledgment: This work is funded by an ERC Advanced Grant (No. 101053925) awarded to JW.
Published:
While Composition has been at the center of philosophical aesthetics, art-theoretical discussions, and art education, it has received considerably less attention in empirical aesthetics. The same is true for the notion of Spatial Layout (hereafter “Layout”), which is more central in the literature on space and scene perception. With this large preregistered (https://osf.io/67mx5) online study of 160 artworks, we aim to provide a foundation for future work by investigating the relationship of important aesthetic measures, namely Pleasure, Interest, Order, and Complexity, with Composition and Layout. Our participants were randomly assigned to either the Composition or the Layout condition, received definitions and examples of good and poor Composition or clear and unclear Layout, and then viewed 50 randomly selected artworks in two blocks. In the first block, they rated them on either Pleasure or Interest, either Order or Complexity, and either Composition or Layout, using 7-point Likert scales. In the second block, they rated the same images again (in a new random order) on Composition or Layout and on the two remaining aesthetic concepts. Participants also filled out a standard demographics questionnaire, an art-experience questionnaire, and scales from selected personality questionnaires for Openness to Experience, Need for Closure, Sensation Seeking, and Aesthetic Sensitivity. First results (N=494) show that high ratings for Composition and Layout lead to higher Pleasure ratings. Ratings for Order are highly positively correlated with ratings for Composition and Layout, while ratings for Complexity are negatively correlated with ratings for Composition and Layout. Participants scoring high on Openness and Sensation Seeking require lower scores on Composition and Layout to give higher Pleasure ratings. Composition and Layout are correlated more highly for representational artworks than for abstract artworks, and their relationship with other concepts seems to be slightly different.
Data collection is still ongoing until we have ratings from 1280 participants. Acknowledgment: This work is funded by an ERC Advanced Grant (No. 101053925) awarded to JW.
Published:
Spatial Layout is an important concept in the literature on space and scene perception, while Composition is central in theoretical aesthetics, art history, philosophy, and education. Both factors depend strongly on perceptual organization, and they could therefore have a similar impact on aesthetic appreciation. We aimed to reveal similarities and differences between both concepts in relation to aesthetic appreciation by conducting an extensive online study (N=1300) with a diverse stimulus set of real-world images. We collected 7-point ratings for Spatial Layout, Composition, Order, Complexity, Pleasure, and Interest on 160 images of 80 manmade and 80 natural scenes. Participants rated either Composition or Spatial Layout of 40 images in two blocks, after receiving a brief explanation with examples of good and bad composition, or clear and unclear spatial layout. In the first block, they also rated either Pleasure or Interest, and either Order or Complexity. In the second block, they rated the same images again on Composition or Spatial Layout and the remaining two concepts. Participants also completed questionnaires for basic demographics, art experience, and personality. First analyses with Spearman correlations confirmed that Spatial Layout and Composition can be judged reliably (0.73 and 0.77, respectively) and were highly correlated (0.74), with some unexplained variance suggesting that they are not completely overlapping concepts. Composition was more relevant for Pleasure and Interest (0.77 and 0.70, respectively) than Spatial Layout (0.54 and 0.43, respectively). Order correlated more strongly with Spatial Layout (0.74) than with Composition (0.66), while for Complexity the correlations were both weak but had opposite signs (0.26 for Composition, -0.12 for Spatial Layout). Our findings indicate that Spatial Layout and Composition are related but distinct factors, with unique relationships to different dimensions of aesthetic appreciation.
Future analyses of this dataset will provide insight into moderators of these relationships.
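For illustration, the correlation figures reported above are Spearman rank correlations, which can be sketched in plain Python as follows. This version assumes untied ranks (no tie correction), so it is a simplification of what a statistics package would compute:

```python
# Illustrative Spearman rank correlation for untied data.
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference.

def spearman_rho(x, y):
    """Spearman rank correlation for two equal-length samples without ties."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank + 1  # ranks start at 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))
```

In practice, a library routine such as `scipy.stats.spearmanr` would be used, since it also handles ties and provides significance tests.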
Published in Attention, Perception & Psychophysics, 2021
When several multistable displays are viewed simultaneously, their perception is synchronized, as they tend to be in the same perceptual state. Here, we investigated the possibility that perception may reflect embedded statistical knowledge of physical interaction between objects for specific combinations of displays and layouts. We used a novel display with two ambiguously rotating gears and an ambiguous walker-on-a-ball display. Both stimuli produce a physically congruent perception when an interaction is possible (i.e., gears counterrotate, and the ball rolls under the walker’s feet). Next, we gradually manipulated the stimuli to either introduce abrupt changes to the potential physical interaction between objects or keep it constant despite changes in the visual stimulus. We characterized the data using four different models that assumed (1) independence of perception of the stimulus, (2) dependence on the stimulus’s properties, (3) dependence on physical configuration alone, and (4) an interaction between stimulus properties and a physical configuration. We observed that for the ambiguous gears, the perception was correlated with the stimulus changes rather than with the possibility of physical interaction. The perception of walker-on-a-ball was independent of the stimulus but depended instead on whether participants responded about a relative motion of two objects (perception was biased towards physically congruent motion) or the absolute motion of the walker alone (perception was independent of the rotation of the ball). Neither of the two experiments supported the idea of embedded knowledge of physical interaction.
Recommended citation: Pastukhov, A., Koßmann, L. & Carbon, CC. When perception is stronger than physics: Perceptual similarities rather than laws of physics govern the perception of interacting objects. Atten Percept Psychophys 84, 124–137 (2022). https://doi.org/10.3758/s13414-021-02383-1 https://link.springer.com/article/10.3758/s13414-021-02383-1
Published in PsyArXiv, 2022
Anchoring, the assimilation of numerical estimates toward previously considered numbers, has generally been separated into anchoring from self-generated anchors (e.g., people first thinking of 9 months when asked for the gestation period of an animal) and experimenter-provided anchors (e.g., experimenters letting participants spin fortune wheels). For some time, the two types of anchoring were believed to be explained by two different theoretical accounts. However, later research showed crossover between the accounts. What now remains are contradictions between past and recent findings, specifically, which moderators affect which type of anchoring. We conducted three replications (Ntotal = 653) of seminal studies on the distinction between self-generated and experimenter-provided anchoring effects where we investigated the moderators need for cognition, cognitive load, and forewarning. We found no evidence that either type of anchoring is moderated by any of the moderators. In line with recent replication efforts, we found that anchoring effects were robust, but the findings on moderators of anchoring effects should be treated with caution.
Recommended citation: Röseler, L., Bögler, H. L., Koßmann, L., Krueger, S., Bickenbach, S., Bühler, R., della Guardia, J., et al. (2022, April 13). Replicating Epley and Gilovich: Need for Cognition, Cognitive Load, and Forewarning do not Moderate Anchoring Effects. PsyArXiv. Retrieved from psyarxiv.com/bgp3m https://psyarxiv.com/bgp3m/
Published in Frontiers in Neuroscience, 2022
We examined whether the effect of facial coverings on person perception is influenced by the perceiver’s attitudes. We conducted two online experiments in which participants saw the same human target persons repeatedly appearing with and without a specific piece of clothing and had to judge the target persons’ character. In Experiment 1 (N = 101), we investigated how wearing a facial mask influences person perception depending on the perceiver’s attitude toward measures against the COVID-19 pandemic. In Experiment 2 (N = 114), we examined the effect of wearing a head cover associated with Arabic culture on person perception depending on the perceiver’s attitude toward Islam. Both studies were preregistered; both found evidence that person perception is a process shaped by the personal attitudes of the perceiver as well as by the target person’s outward appearance. Integrating previous findings, we demonstrate that facial covers, as well as head covers, operate as cues that perceivers use to infer the target persons’ underlying attitudes. The judgment of the target person is shaped by the perceived attitude toward what the facial covering stereotypically symbolizes. Download paper here
Recommended citation: Leder, J., Koßmann, L. & Carbon, C. (2022). Perceptions of persons who wear face coverings are modulated by the perceivers’ attitude. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.988546. https://www.frontiersin.org/articles/10.3389/fnins.2022.988546/full
Published in Journal of Vision, 2023
When multistable displays are presented intermittently with long blank intervals, their onset perception is determined by perceptual memory of multistable displays. We investigated when and how it is formed using a reverse correlation method and bistable kinetic depth effect displays. Each experimental block consisted of interleaved fully ambiguous probe and exogenously disambiguated prime displays. The purpose of the former was to “read out” the perceptual memory, whereas the latter contained purely random disambiguation sequences that were presented at the beginning of the prime display, throughout the entire presentation, or at the beginning and the end of the presentation. For each experiment and condition, we selected a subset of trials with disambiguation sequences that led to a change in perception of either the prime itself (sequences that modified perception) or the following fully ambiguous probe (sequences that modified perceptual memory). We estimated average disambiguation sequences for each participant using additive linear models. We found that an optimal sequence started at the onset with a moderate disambiguation against the previously dominant state (dominant perception for the previous probe) that gradually reduced until the display is fully ambiguous. We also show that the same sequence leads to an altered perception of the prime, indicating that perception and perceptual memory form at the same time. We suggest that perceptual memory is a consequence of an earlier evidence accumulation process and is informative about how the visual system treated ambiguity in the past rather than how it anticipates an uncertain future. Download paper here
Recommended citation: Pastukhov, A., Koßmann, L. & Carbon, C. (2023). Reconstructing a disambiguation sequence that forms perceptual memory of multistable displays via reverse correlation method: Bias onset perception but gently. Journal of Vision, 23(3), 10. https://doi.org/10.1167/jov.23.3.10. https://jov.arvojournals.org/article.aspx?articleid=2785454
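The core reverse-correlation step described above — selecting the trials in which perception changed and averaging their random disambiguation sequences frame by frame — can be sketched as follows. The data layout is assumed for illustration; the paper itself estimated the sequences with additive linear models rather than a plain average:

```python
# Illustrative reverse-correlation step: frame-wise average of random
# bias sequences over trials that produced a perceptual change.

def average_bias_sequence(trials):
    """trials: list of (bias_sequence, percept_changed) pairs, where
    bias_sequence is a list of signed bias levels (one per 30-ms frame).
    Returns the frame-wise mean sequence over trials with a change."""
    selected = [seq for seq, changed in trials if changed]
    if not selected:
        return []
    n_frames = len(selected[0])
    return [sum(seq[i] for seq in selected) / len(selected)
            for i in range(n_frames)]
```

Applied separately to trials where the prime itself reversed versus trials where the following probe reversed, this kind of average yields the biasing sequences compared in the study.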
Published in arXiv preprint, 2024
Over the past two decades, researchers in the field of visual aesthetics have studied numerous quantitative (objective) image properties and how they relate to visual aesthetic appreciation. However, results are difficult to compare between research groups. One reason is that researchers use different sets of image properties in their studies. But even if the same properties are used, the image pre-processing techniques may differ and often researchers use their own customized scripts to calculate the image properties. To provide greater accessibility and comparability of research results in visual experimental aesthetics, we developed an open-access and easy-to-use toolbox (called the ‘Aesthetics Toolbox’). The Toolbox allows users to calculate a well-defined set of quantitative image properties popular in contemporary research. The properties include lightness and color statistics, Fourier spectral properties, fractality, self-similarity, symmetry, as well as different entropy measures and CNN-based variances. Compatible with most devices, the Toolbox provides an intuitive click-and-drop web interface. In the Toolbox, we integrated the original scripts of four different research groups and translated them into Python 3. To ensure that results were consistent across analyses, we took care that results from the Python versions of the scripts were the same as those from the original scripts. The toolbox, detailed documentation, and a link to the cloud version are available via GitHub: https://github.com/RBartho/Aesthetics-Toolbox. In summary, we developed a toolbox that helps to standardize and simplify the calculation of quantitative image properties for visual aesthetics research. Download paper here
Recommended citation: Redies, C., Bartho, R., Koßmann, L., Spehar, B., Hübner, R., Wagemans, J., & Hayn-Leichsenring, G. U. (2024). A toolbox for calculating objective image properties in aesthetics research. arXiv preprint arXiv:2408.10616. https://arxiv.org/abs/2408.10616
Published in arXiv preprint, 2024
The human brain has an inherent ability to fill in gaps to perceive figures as complete wholes, even when parts are missing or fragmented. This phenomenon is known as Closure in psychology, one of the Gestalt laws of perceptual organization, explaining how the human brain interprets visual stimuli. Given the importance of Closure for human object recognition, we investigate whether neural networks rely on a similar mechanism. Exploring this crucial human visual skill in neural networks has the potential to highlight their comparability to humans. Recent studies have examined the Closure effect in neural networks. However, they typically focus on a limited selection of Convolutional Neural Networks (CNNs) and have not reached a consensus on their capability to perform Closure. To address these gaps, we present a systematic framework for investigating the Closure principle in neural networks. We introduce well-curated datasets designed to test for Closure effects, including both modal and amodal completion. We then conduct experiments on various CNNs employing different measurements. Our comprehensive analysis reveals that VGG16 and DenseNet-121 exhibit the Closure effect, while other CNNs show variable results. We interpret these findings by blending insights from psychology and neural network research, offering a unique perspective that enhances transparency in understanding neural networks. Our code and dataset will be made available on GitHub. Download paper here
Recommended citation: Y. Zhang*, D. Soydaner*, L. Koßmann, F. Behrad, J. Wagemans (2024). Finding Closure: A Closer Look at the Gestalt Law of Closure in Convolutional Neural Networks, arXiv preprint arXiv:2408.12460. https://arxiv.org/abs/2408.12460
Published in European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 2024
Deep neural networks perform well in object recognition, but do they perceive objects like humans? This study investigates the Gestalt principle of closure in convolutional neural networks. We propose a protocol to identify closure and conduct experiments using simple visual stimuli with progressively removed edge sections. We evaluate well-known networks on their ability to classify incomplete polygons. Our findings reveal a performance degradation as the edge removal percentage increases, indicating that current models heavily rely on complete edge information for accurate classification. The data used in our study is available on GitHub {https://github.com/zhangyy708/closure-in-CNNs}. Download paper here
Recommended citation: Y. Zhang, D. Soydaner, F. Behrad, L. Koßmann, J. Wagemans (2024). Investigating the Gestalt Principle of Closure in Deep Convolutional Neural Networks, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 9-11 October, Bruges, Belgium. https://www.esann.org/sites/default/files/proceedings/2024/ES2024-111.pdf
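As a rough illustration of the stimulus manipulation described above — progressively removing sections of polygon edges — the following sketch trims a fixed fraction from each edge, keeping the retained segment centered on the edge midpoint (so corners disappear first). This is one plausible construction; the exact procedure used in the paper may differ:

```python
# Hypothetical sketch of incomplete-polygon stimuli: remove a fraction
# of each edge's length, keeping the center of each edge.

def trim_edges(vertices, removal_fraction):
    """vertices: polygon corners as (x, y) tuples, in order.
    Returns the retained line segments after removing `removal_fraction`
    of each edge's length, centered on the edge midpoint."""
    keep = 1.0 - removal_fraction
    segments = []
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2  # edge midpoint
        # shrink both endpoints toward the midpoint by the keep factor
        a = (cx + (x0 - cx) * keep, cy + (y0 - cy) * keep)
        b = (cx + (x1 - cx) * keep, cy + (y1 - cy) * keep)
        segments.append((a, b))
    return segments
```

Rendering such segment lists at increasing `removal_fraction` values yields the progressively fragmented contours on which classification accuracy can be measured.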
Published in Journal of Comments and Replications in Economics, 2024
Anchoring, the assimilation of numerical estimates toward previously considered numbers, has generally been separated into anchoring from self-generated anchors (e.g., people first thinking of 9 months when asked for the gestation period of an animal) and experimenter-provided anchors (e.g., experimenters letting participants spin fortune wheels). For some time, the two types of anchoring were believed to be explained by two different theoretical accounts. However, later research showed crossover between the accounts. What now remains are contradictions between past and recent findings, specifically, which moderators affect which type of anchoring. We conducted three replications (Ntotal = 657) of seminal studies on the distinction between self-generated and experimenter-provided anchoring effects where we investigated the moderators need for cognition, cognitive load, and forewarning. We found no evidence that either type of anchoring is moderated by any of the moderators. In line with recent replication efforts, we found that anchoring effects were robust, but the findings on moderators of anchoring effects should be treated with caution. Download paper here
Recommended citation: Röseler, L., Bögler, H. L., Koßmann, L., Krueger, S., Bickenbach, S., Bühler, R., della Guardia, J., Köppel, L.-M. A., Möhring, J., Ponader, S., Roßmaier, K., Sing, J. (2024). Need for Cognition, Cognitive Load, and Forewarning do not Moderate Anchoring Effects. A Replication Study of Epley & Gilovich (Journal of Behavioral Decision Making, 2005; Psychological Science, 2006). Journal of Comments and Replications in Economics, 3(2024-6). https://doi.org/10.18718/81781.38 https://www.jcr-econ.org/need-for-cognition-cognitive-load-and-forewarning-do-not-moderate-anchoring-effects-replication/
Published in Behavior Research Methods, 2025
Over the past two decades, researchers in the field of visual aesthetics have studied numerous quantitative (objective) image properties and how they relate to visual aesthetic appreciation. However, results are difficult to compare between research groups. One reason is that researchers use different sets of image properties in their studies. However, even if the same properties are used, the image pre-processing techniques may differ, and researchers often use their own customized scripts to calculate the image properties. To provide better accessibility and comparability of research results in visual experimental aesthetics, we developed an open-access and easy-to-use toolbox called Aesthetics Toolbox. The Toolbox allows users to calculate a well-defined set of quantitative image properties popular in contemporary research. The properties include image dimensions, lightness and color statistics, complexity, symmetry, balance, Fourier spectrum properties, fractal dimension, self-similarity, as well as entropy measures and CNN-based variances. Compatible with most devices, the Toolbox provides an intuitive click-and-drop web interface. In the Toolbox, we integrated the original scripts of four different research groups and translated them into Python 3. To ensure that results were consistent across analyses, we took care that results from the Python versions of the scripts were the same as those from the original scripts. The toolbox, detailed documentation, and a link to the cloud version are available via GitHub: https://github.com/RBartho/Aesthetics-Toolbox. In summary, we developed a toolbox that helps to standardize and simplify the calculation of quantitative image properties for visual aesthetics research. Download paper here
Recommended citation: Redies, C., Bartho, R., Koßmann, L., Spehar, B., Hübner, R., Wagemans, J., & Hayn-Leichsenring, G. U. (2025). A toolbox for calculating quantitative image properties in aesthetics research. Behavior Research Methods, 57(4). https://doi.org/10.3758/s13428-025-02632-3 https://link.springer.com/article/10.3758/s13428-025-02632-3
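To give a flavor of the simplest property family the Toolbox covers, here is a hedged sketch of per-pixel lightness statistics in plain Python. The Rec. 601 luma weights are an illustrative choice, and the function name is hypothetical; the Toolbox's actual definitions and pre-processing may differ:

```python
# Illustrative sketch of lightness statistics over an image's pixels.
# The luma formula (Rec. 601 weights) is an assumption for illustration.
import math

def lightness_stats(pixels):
    """pixels: iterable of (r, g, b) tuples with channels in 0..255.
    Returns (mean, standard deviation) of per-pixel lightness."""
    luma = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    mean = sum(luma) / len(luma)
    variance = sum((v - mean) ** 2 for v in luma) / len(luma)
    return mean, math.sqrt(variance)
```

A real pipeline would load the image with an imaging library (e.g., Pillow) and likely work on arrays rather than tuples, but the statistic itself reduces to this computation.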