Conceptual Challenges in Coordinating Theoretical and Data-centered Estimates of Probability

2011-01-01
Konold, Cliff
Madden, Sandra
Pollatsek, Alexander
Pfannkuch, Maxine
Wild, Chris
Ziedins, Ilze
Finzer, William
Horton, Nicholas J.
Kazak, Sibel
A core component of informal statistical inference is the recognition that judgments based on sample data are inherently uncertain. This implies that instruction aimed at developing informal inference needs to foster basic probabilistic reasoning. In this article, we analyze and critique the now-common practice of introducing students to both "theoretical" and "experimental" probability, typically with the hope that students will come to see the latter as converging on the former as the number of observations grows. On the surface of it, this approach would seem to fit well with objectives in teaching informal inference. However, our in-depth analysis of one eighth-grader's reasoning about experimental and theoretical probabilities points to various pitfalls in this approach. We offer tentative recommendations about how some of these issues might be addressed.
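The convergence the abstract refers to, experimental probability approaching theoretical probability as the number of observations grows, is an instance of the law of large numbers. A minimal simulation sketch (an illustration only; the article itself reports an interview study, not code) makes the idea concrete:

```python
import random

def experimental_probability(n_rolls, outcome=6, sides=6, seed=1):
    """Estimate P(outcome) for a fair die from n_rolls simulated rolls."""
    rng = random.Random(seed)
    hits = sum(rng.randint(1, sides) == outcome for _ in range(n_rolls))
    return hits / n_rolls

theoretical = 1 / 6
for n in (10, 100, 1_000, 100_000):
    est = experimental_probability(n)
    print(f"n={n:>6}: experimental={est:.4f}  |error|={abs(est - theoretical):.4f}")
```

For small n the experimental estimate can sit far from 1/6, which is exactly the kind of observation the article suggests students struggle to reconcile with the theoretical value.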
MATHEMATICAL THINKING AND LEARNING

Suggestions
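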

Fuzzy classification models based on Tanaka's fuzzy linear regression approach and nonparametric improved fuzzy classifier functions
Özer, Gizem; Köksal, Gülser; Department of Industrial Engineering (2009)
In some classification problems where human judgments and qualitative, imprecise data exist, uncertainty comes from fuzziness rather than randomness. Only a limited number of fuzzy classification approaches are available for such problems to capture the effect of the fuzzy uncertainty embedded in the data. The scope of this study mainly comprises two parts: new fuzzy classification approaches based on Tanaka's Fuzzy Linear Regression (FLR) approach, and an improvement of an existing one, Improved Fuzzy Classifier Functions...
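For context on the method this thesis builds on: Tanaka's FLR fits symmetric triangular fuzzy coefficients by a linear program that minimizes total spread while requiring every observation to lie inside the h-level fuzzy band. A minimal sketch of that LP (an illustration of Tanaka's formulation only, not the thesis's classifier; the helper name `tanaka_flr` is ours):

```python
import numpy as np
from scipy.optimize import linprog

def tanaka_flr(X, y, h=0.5):
    """Tanaka's fuzzy linear regression: find coefficient centers a and
    spreads c >= 0 minimizing total spread, subject to every y_i lying
    inside the h-level band a.x_i +/- (1-h) c.|x_i|."""
    X = np.column_stack([np.ones(len(X)), X])   # add intercept column
    n, p = X.shape
    absX = np.abs(X)
    # Decision vector z = [a (free), c (>= 0)]; objective = total spread
    cost = np.concatenate([np.zeros(p), absX.sum(axis=0)])
    #  a.x_i - (1-h) c.|x_i| <= y_i   and   -a.x_i - (1-h) c.|x_i| <= -y_i
    A_ub = np.vstack([np.hstack([X, -(1 - h) * absX]),
                      np.hstack([-X, -(1 - h) * absX])])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p], res.x[p:]                 # centers, spreads

# Toy usage on noisy linear data
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 30)
centers, spreads = tanaka_flr(x.reshape(-1, 1), y)
print("centers:", centers, "spreads:", spreads)
```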
Asymmetric Confidence Interval with Box-Cox Transformation in R
Dağ, Osman; İlk Dağ, Özlem (2017-12-08)
The normal distribution is important in the statistical literature, since many statistical methods, such as the t-test, analysis of variance, and regression analysis, are based on it. However, the normality assumption is difficult to satisfy for real-life datasets. The Box–Cox power transformation is the most well-known and commonly utilized remedy [2]. The algorithm relies on a single transformation parameter. In the original article [2], maximum likelihood estimation was proposed for the estimation...
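The step the abstract describes, estimating the single Box–Cox parameter by maximum likelihood, is easy to demonstrate. The sketch below uses Python's scipy rather than the authors' R implementation, so it illustrates the transformation itself, not their package:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=0.6, size=200)   # right-skewed, positive data

# Maximum likelihood estimate of the single Box-Cox parameter lambda
transformed, lam = stats.boxcox(x)
print(f"lambda-hat = {lam:.3f}")

# Shapiro-Wilk before and after: the transformed data should look closer to normal
print("raw         p =", stats.shapiro(x).pvalue)
print("transformed p =", stats.shapiro(transformed).pvalue)
```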
Assessment of software process and metrics to support quantitative understanding
Tarhan, Ayça; Demirörs, Onur (2007-11-07)
The use of process metrics and data for quantitative understanding is not straightforward. If process components are identified and a measurement process is followed, process metrics and data can be used effectively; without these practices, process metrics and data can hardly be trusted for quantitative understanding. In this paper, we summarize eight case studies that we performed in different industrial contexts. The case studies rely on an assessment approach that i...
Implementation of different algorithms in linear mixed models: case studies with TIMSS
Koca, Burcu; Gökalp Yavuz, Fulya; Department of Statistics (2021-09-06)
Mixed models are frequently used for longitudinal data, in which measurements are repeated over time on the same subject, and for clustered data, in which observations are gathered within groups. Modeling the dependency structure between repetitions and between observations in the same cluster requires algorithms for parameter estimation. The same model can be solved with various algorithms arising from differences in setup, inference, and approach. In this study, several algorithms used for LM...
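As an illustration of the kind of model whose estimation algorithms such a study compares, here is a minimal random-intercept LMM fit by REML in Python, on synthetic student-in-school data standing in for TIMSS (the sketch does not reproduce the thesis's algorithm comparison):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic clustered data: students nested in schools
rng = np.random.default_rng(7)
n_schools, n_students = 30, 25
school = np.repeat(np.arange(n_schools), n_students)
u = rng.normal(0, 5, n_schools)                 # random school intercepts
ses = rng.normal(0, 1, n_schools * n_students)  # student-level covariate
score = 500 + 20 * ses + u[school] + rng.normal(0, 30, len(ses))
df = pd.DataFrame({"score": score, "ses": ses, "school": school})

# Random-intercept model; REML is one of the standard estimation approaches
model = smf.mixedlm("score ~ ses", df, groups=df["school"])
result = model.fit(reml=True)
print(result.summary())
```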
An approach to the mean shift outlier model by Tikhonov regularization and conic programming
Taylan, Pakize; Yerlikaya-Özkurt, Fatma; Weber, Gerhard Wilhelm (IOS Press, 2014-01-01)
In statistical research, regression models based on data play a central role; one of these models is the linear regression model. However, this model may give misleading results when the data contain outliers. Outliers in linear regression can be handled in two stages: by using the Mean Shift Outlier Model (MSOM) and by providing a new solution for this model. First, we construct a Tikhonov regularization problem for the MSOM. Then, we treat this problem using convex optimization techniques, specifically conic programming...
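A minimal sketch of the first stage, the MSOM with a Tikhonov (ridge) penalty on the shift terms, solved here as augmented least squares rather than with the paper's conic-programming treatment (the function name `msom_tikhonov` is ours):

```python
import numpy as np

def msom_tikhonov(X, y, lam=1.0):
    """Mean Shift Outlier Model with a Tikhonov penalty on the shifts:
    minimize ||y - X b - g||^2 + lam * ||g||^2 over (b, g), where g_i
    absorbs the outlying shift of observation i. Solved as an augmented
    least-squares problem (a simple stand-in for conic programming)."""
    n, p = X.shape
    # Stack [X I] over [0 sqrt(lam) I] so lstsq penalizes g but not b
    A = np.vstack([np.hstack([X, np.eye(n)]),
                   np.hstack([np.zeros((n, p)), np.sqrt(lam) * np.eye(n)])])
    rhs = np.concatenate([y, np.zeros(n)])
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return sol[:p], sol[p:]          # coefficients, estimated shifts

# Toy data with two injected outliers; large |g_i| flags them
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, 50)
y[[5, 20]] += 8.0
beta, gamma = msom_tikhonov(X, y, lam=2.0)
print("beta:", beta)
print("largest shifts at observations:", np.argsort(-np.abs(gamma))[:2])
```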
Citation Formats
C. Konold et al., “Conceptual Challenges in Coordinating Theoretical and Data-centered Estimates of Probability,” MATHEMATICAL THINKING AND LEARNING, vol. 13, no. 1-2, pp. 68–86, 2011. [Online]. Available: https://hdl.handle.net/11511/100748.