Inductive categorical arguments can differ greatly in the degree to which they are perceived as strong or reasonable. The proposed hypothesis-assessment model views the process of estimating argument strength as a form of hypothesis assessment in which accessible superordinate categories are evaluated for their consistency with the given argument categories. A pair of experiments was conducted to test the model's ability to predict the perceived strength of specific-conclusion arguments (e.g., All Robins Have Substance X; therefore, All Ostriches Have Substance X). In Experiment 1, an argument-assessment group rated the strength of a diverse set of arguments, and a hypothesis-construction group generated superordinate hypotheses in response to the categories composing these arguments. An unbiased estimate of the degree to which argument categories fit (i.e., qualified as members of) participant-generated superordinates accounted for 88% of the variability in perceived argument strength. In Experiment 2, participants rated the strength of a set of single-premise arguments after completing a separate task involving superordinate categories. Half of these superordinates subsumed both the premise and conclusion categories of one of the subsequently presented arguments; the other half subsumed either the premise or the conclusion category of one of these arguments, but not both. Arguments were rated significantly higher when the primed superordinate subsumed both argument categories. These results support the general view of categorical induction as hypothesis assessment and, more specifically, the hypothesized role that superordinate accessibility plays in determining argument strength.