Many category learning experiments use supervised learning (i.e., trial-by-trial feedback). Most of these procedures use deterministic feedback, teaching participants to classify exemplars into consistent categories (i.e., stimulus i is always assigned to category k). Although some researchers argue that natural learning conditions are more likely to be inconsistent, the literature on probabilistic feedback in category learning is sparse. Our analysis of the literature suggests that part of the reason for this sparsity is the relative inflexibility of current paradigms and procedures for designing probabilistic feedback experiments. The work reported here offers a novel paradigm (the Probabilistic Prototype Distortion task) that gives researchers greater flexibility in creating experiments with different p(category|feature) probabilities and allows the amount of randomness in an experimental task to be manipulated parametrically. We present a detailed procedure, its implementation, experimental results, and a discussion of this novel paradigm. Our results suggest that experiments designed with these procedures allow subjects to achieve the desired classification performance.
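As a minimal illustration (not the authors' implementation, which is described in the full paper), probabilistic trial-by-trial feedback with a tunable consistency parameter might be sketched as follows; the function name and parameters are hypothetical:

```python
import random

def probabilistic_feedback(true_category: str, p_consistent: float,
                           categories: list[str], rng: random.Random) -> str:
    """Return a feedback label: the nominal category with probability
    p_consistent, otherwise a uniformly chosen alternative category.
    p_consistent parametrically controls the amount of randomness
    (1.0 reduces to the usual deterministic-feedback design)."""
    if rng.random() < p_consistent:
        return true_category
    alternatives = [c for c in categories if c != true_category]
    return rng.choice(alternatives)

rng = random.Random(0)
labels = [probabilistic_feedback("A", 0.8, ["A", "B"], rng)
          for _ in range(1000)]
print(labels.count("A") / 1000)  # roughly 0.8 for a large sample
```

Setting p_consistent per stimulus would correspond to assigning each stimulus its own p(category|feature) value, which is the kind of flexibility the abstract describes.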
|Number of pages||7|
|Status||Published - 2021|
|Published externally||Yes|
|Event||43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021 - Virtual, Online, Austria|
Duration: 26 Jul 2021 → 29 Jul 2021
|Conference||43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021|
|Period||26/07/21 → 29/07/21|