Model selection
From Wikipedia, the free encyclopedia
Task of selecting a statistical model from a set of candidate models

Model selection is the task of selecting a model from among various candidates on the basis of a performance criterion, in order to choose the best one.[1] In the context of machine learning and, more generally, statistical analysis, this may be the selection of a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected is well-suited to the problem of model selection. Given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice (Occam's razor).

Konishi & Kitagawa (2008, p. 75) state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, Cox (2006, p. 197) has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".

Model selection may also refer to the problem of selecting a few representative models from a large set of computational models for the purpose of decision making or optimization under uncertainty.[2]

In machine learning, algorithmic approaches to model selection include feature selection, hyperparameter optimization, and statistical learning theory.

Introduction

[Figure: The scientific observation cycle]

In its most basic forms, model selection is one of the fundamental tasks of scientific inquiry. Determining the principle that explains a series of observations is often linked directly to a mathematical model predicting those observations. For example, when Galileo performed his inclined plane experiments, he demonstrated that the motion of the balls fitted the parabola predicted by his model [citation needed].

Of the countless mechanisms and processes that could have produced the data, how can one even begin to choose the best model? The mathematical approach commonly taken decides among a set of candidate models; this set must be chosen by the researcher. Often simple models such as polynomials are used, at least initially [citation needed]. Burnham & Anderson (2002) emphasize throughout their book the importance of choosing models based on sound scientific principles, such as an understanding of the phenomenological processes or mechanisms (e.g., chemical reactions) underlying the data.

Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity. More complex models will be better able to adapt their shape to fit the data (for example, a fifth-order polynomial can exactly fit six points), but the additional parameters may not represent anything useful. (Perhaps those six points are really just randomly distributed about a straight line.) Goodness of fit is generally determined using a likelihood ratio approach, or an approximation of this, leading to a chi-squared test. The complexity is generally measured by counting the number of parameters in the model.
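
To make the overfitting point concrete, here is a minimal sketch (invented data, numpy only): a fifth-order polynomial drives the in-sample error on six points to essentially zero, while the straight line that actually generated the points predicts fresh data far better.

```python
import numpy as np

rng = np.random.default_rng(0)

# Six points that are really just randomly distributed about a straight line.
x = np.linspace(0, 5, 6)
y = 1.5 * x + 0.5 + rng.normal(scale=0.3, size=6)

for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)            # least-squares polynomial fit
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    print(f"degree {degree}: RSS on the six points = {rss:.2e}")

# Fresh data from the same line exposes the overfit of the degree-5 model.
x_new = rng.uniform(0, 5, 100)
y_new = 1.5 * x_new + 0.5 + rng.normal(scale=0.3, size=100)
for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    rss_new = np.sum((y_new - np.polyval(coeffs, x_new)) ** 2)
    print(f"degree {degree}: RSS on 100 new points = {rss_new:.2f}")
```

The degree-5 fit interpolates the six points exactly (it has six free parameters), yet its extra parameters chase the noise rather than the signal.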

Model selection techniques can be considered as estimators of some physical quantity, such as the probability of the model producing the given data. The bias and variance are both important measures of the quality of this estimator; efficiency is also often considered.

A standard example of model selection is that of curve fitting, where, given a set of points and other background knowledge (e.g., the points result from i.i.d. sampling), we must select a curve that describes the function that generated the points.

Two directions of model selection


There are two main objectives in inference and learning from data. One is scientific discovery, also called statistical inference: understanding the underlying data-generating mechanism and interpreting the nature of the data. The other is prediction of future or unseen observations, also called statistical prediction, in which the data scientist is not necessarily concerned with an accurate probabilistic description of the data. Of course, one may also be interested in both directions.

In line with the two different objectives, model selection can also have two directions: model selection for inference and model selection for prediction.[3] The first direction is to identify the best model for the data, which will preferably provide a reliable characterization of the sources of uncertainty for scientific interpretation. For this goal, it is important that the selected model not be overly sensitive to the sample size. Accordingly, an appropriate notion for evaluating model selection is selection consistency, meaning that the most robust candidate will be consistently selected given sufficiently many data samples.
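
Selection consistency can be checked empirically. The following is a minimal simulation sketch under assumed conditions (a straight-line truth among polynomial candidates, BIC computed for a Gaussian likelihood up to constants shared by all candidates); it is an illustration, not a general recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

def bic(x, y, degree):
    """BIC of a polynomial least-squares fit with Gaussian errors,
    up to additive constants shared by all candidate degrees."""
    n = len(x)
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 1                     # number of fitted coefficients
    return n * np.log(rss / n) + k * np.log(n)

true_degree, candidates, trials = 1, range(1, 5), 200
for n in (20, 100, 1000):
    hits = 0
    for _ in range(trials):
        x = rng.uniform(-1, 1, n)
        y = 2.0 * x + rng.normal(scale=0.5, size=n)   # true model: degree 1
        hits += min(candidates, key=lambda d: bic(x, y, d)) == true_degree
    print(f"n={n:5d}: true degree recovered in {hits / trials:.0%} of trials")
```

A consistent criterion such as BIC recovers the true degree with frequency approaching one as n grows; an efficient criterion such as AIC instead retains a nonvanishing chance of selecting a slightly larger model.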

The second direction is to choose a model as machinery to offer excellent predictive performance. For the latter, however, the selected model may simply be the lucky winner among a few close competitors, yet the predictive performance can still be the best possible. If so, the model selection is fine for the second goal (prediction), but the use of the selected model for insight and interpretation may be severely unreliable and misleading.[3] Moreover, for very complex models selected this way, even predictions may be unreasonable for data only slightly different from those on which the selection was made.[4]

Methods to assist in choosing the set of candidate models

  • Data transformation (statistics)
  • Exploratory data analysis
  • Model specification
  • Scientific method

Criteria


Below is a list of criteria for model selection. The most commonly used information criteria are (i) the Akaike information criterion and (ii) the Bayes factor and/or the Bayesian information criterion (which to some extent approximates the Bayes factor); see Stoica & Selen (2004) for a review. A minimal computational sketch of the two most common criteria follows the list.

  • Akaike information criterion (AIC), a measure of the goodness of fit of an estimated statistical model
  • Bayes factor
  • Bayesian information criterion (BIC), also known as the Schwarz information criterion, a statistical criterion for model selection
  • Bridge criterion (BC), a statistical criterion that aims to attain the better of the performances of AIC and BIC, regardless of whether the model specification is appropriate.[5]
  • Cross-validation
  • Deviance information criterion (DIC), another Bayesian oriented model selection criterion
  • False discovery rate
  • Focused information criterion (FIC), a selection criterion sorting statistical models by their effectiveness for a given focus parameter
  • Hannan–Quinn information criterion, an alternative to the Akaike and Bayesian criteria
  • Kashyap information criterion (KIC), a powerful alternative to AIC and BIC that makes use of the Fisher information matrix
  • Likelihood-ratio test
  • Mallows's Cp
  • Minimum description length
  • Minimum message length (MML)
  • PRESS statistic, also known as the PRESS criterion
  • Structural risk minimization
  • Stepwise regression
  • Watanabe–Akaike information criterion (WAIC), also called the widely applicable information criterion
  • Extended Bayesian information criterion (EBIC), an extension of the ordinary Bayesian information criterion (BIC) for models with high-dimensional parameter spaces.
  • Extended Fisher Information Criterion (EFIC) is a model selection criterion for linear regression models.
  • Constrained Minimum Criterion (CMC), a frequentist method for regression model selection based on the following geometric observations. In the parameter vector space of the full model, every vector represents a model, and there exists a ball centered on the true parameter vector of the full model within which the true model is the smallest model (in the $L_0$ norm). As the sample size goes to infinity, the MLE converges to the true parameter vector, pulling the shrinking likelihood-ratio confidence region toward it, so the confidence region lies inside the ball with probability tending to one. The CMC selects the smallest model in this region; whenever the region captures the true parameter vector, the CMC selection is the true model. Hence, the probability that the CMC selection is the true model is greater than or equal to the confidence level.[6]
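
As noted before the list, here is a minimal computational sketch of the two most common criteria, AIC and BIC, for nested least-squares fits. It assumes Gaussian errors and drops additive constants common to all candidates, so only differences between the reported values are meaningful; the quadratic test data are invented for illustration.

```python
import numpy as np

def gaussian_ic(y, y_hat, k):
    """AIC and BIC of a least-squares fit with Gaussian errors, up to
    additive constants shared by all candidate models."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    log_like = -0.5 * n * np.log(rss / n)  # maximized log-likelihood, up to a constant
    # k counts the fitted coefficients; the shared noise-variance parameter
    # is omitted because it shifts every candidate's score equally.
    return 2 * k - 2 * log_like, k * np.log(n) - 2 * log_like

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 80)
y = 1.0 + 0.5 * x - 0.7 * x**2 + rng.normal(scale=0.4, size=80)  # quadratic truth

for degree in range(1, 5):
    coeffs = np.polyfit(x, y, degree)
    aic, bic = gaussian_ic(y, np.polyval(coeffs, x), k=degree + 1)
    print(f"degree {degree}: AIC = {aic:7.1f}   BIC = {bic:7.1f}")
```

Both criteria trade the same goodness-of-fit term against different complexity penalties, 2k for AIC versus k ln n for BIC, which is why BIC penalizes extra parameters more heavily once n exceeds e² ≈ 7.4.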

Among these criteria, cross-validation is typically the most accurate, and computationally the most expensive, for supervised learning problems.[citation needed]
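
A minimal sketch of cross-validation used as a selection criterion, again on a hypothetical polynomial-degree problem (numpy only; in practice one would often reach for a library such as scikit-learn):

```python
import numpy as np

def cv_mse(x, y, degree, folds=5):
    """Estimate out-of-sample MSE of a polynomial fit by k-fold cross-validation."""
    idx = np.arange(len(x))
    fold_errors = []
    for test in np.array_split(idx, folds):
        train = np.setdiff1d(idx, test)            # hold out one fold at a time
        coeffs = np.polyfit(x[train], y[train], degree)
        fold_errors.append(np.mean((y[test] - np.polyval(coeffs, x[test])) ** 2))
    return np.mean(fold_errors)

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, 100)
y = np.sin(x) + rng.normal(scale=0.2, size=100)    # smooth truth, noisy samples

best = min(range(1, 8), key=lambda d: cv_mse(x, y, d))
print("degree chosen by 5-fold cross-validation:", best)
```

Each candidate is refit once per fold, which is where both the accuracy and the computational cost noted above come from.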

Burnham & Anderson (2002, §6.3) say the following:

There is a variety of model selection methods. However, from the point of view of statistical performance of a method, and intended context of its use, there are only two distinct classes of methods: These have been labeled efficient and consistent. (...) Under the frequentist paradigm for model selection one generally has three main approaches: (I) optimization of some selection criteria, (II) tests of hypotheses, and (III) ad hoc methods.

See also

  • All models are wrong
  • Analysis of competing hypotheses
  • Automated machine learning (AutoML)
  • Bias-variance dilemma
  • Feature selection
  • Freedman's paradox
  • Grid search
  • Identifiability Analysis
  • Log-linear analysis
  • Model identification
  • Occam's razor
  • Optimal design
  • Parameter identification problem
  • Scientific modelling
  • Statistical model validation
  • Stein's paradox

Notes

  1. ^ Hastie, T.; Tibshirani, R.; Friedman, J. (2009). The Elements of Statistical Learning. Springer. p. 195.
  2. ^ Shirangi, Mehrdad G.; Durlofsky, Louis J. (2016). "A general method to select representative models for decision making and optimization under uncertainty". Computers & Geosciences. 96: 109–123. Bibcode:2016CG.....96..109S. doi:10.1016/j.cageo.2016.08.002.
  3. ^ a b Ding, Jie; Tarokh, Vahid; Yang, Yuhong (2018). "Model Selection Techniques: An Overview". IEEE Signal Processing Magazine. 35 (6): 16–34. arXiv:1810.09583. Bibcode:2018ISPM...35f..16D. doi:10.1109/MSP.2018.2867638. ISSN 1053-5888. S2CID 53035396.
  4. ^ Su, J.; Vargas, D.V.; Sakurai, K. (2019). "One Pixel Attack for Fooling Deep Neural Networks". IEEE Transactions on Evolutionary Computation. 23 (5): 828–841. arXiv:1710.08864. Bibcode:2019ITEC...23..828S. doi:10.1109/TEVC.2019.2890858. S2CID 2698863.
  5. ^ Ding, J.; Tarokh, V.; Yang, Y. (June 2018). "Bridging AIC and BIC: A New Criterion for Autoregression". IEEE Transactions on Information Theory. 64 (6): 4024–4043. arXiv:1508.02473. Bibcode:2018ITIT...64.4024D. doi:10.1109/TIT.2017.2717599. ISSN 1557-9654. S2CID 5189440.
  6. ^ Tsao, Min (2023). "Regression model selection via log-likelihood ratio and constrained minimum criterion". Canadian Journal of Statistics. 52: 195–211. arXiv:2107.08529. doi:10.1002/cjs.11756. S2CID 236087375.

References

  • Aho, K.; Derryberry, D.; Peterson, T. (2014), "Model selection for ecologists: the worldviews of AIC and BIC", Ecology, 95 (3): 631–636, Bibcode:2014Ecol...95..631A, doi:10.1890/13-1452.1, PMID 24804445
  • Akaike, H. (1994), "Implications of informational point of view on the development of statistical science", in Bozdogan, H. (ed.), Proceedings of the First US/JAPAN Conference on The Frontiers of Statistical Modeling: An Informational Approach—Volume 3, Kluwer Academic Publishers, pp. 27–38
  • Anderson, D.R. (2008), Model Based Inference in the Life Sciences, Springer, ISBN 9780387740751
  • Ando, T. (2010), Bayesian Model Selection and Statistical Modeling, CRC Press, ISBN 9781439836156
  • Breiman, L. (2001), "Statistical modeling: the two cultures", Statistical Science, 16: 199–231, doi:10.1214/ss/1009213726
  • Burnham, K.P.; Anderson, D.R. (2002), Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (2nd ed.), Springer-Verlag, ISBN 0-387-95364-7
  • Chamberlin, T.C. (1890), "The method of multiple working hypotheses", Science, 15 (366): 92–6, Bibcode:1890Sci....15R..92., doi:10.1126/science.ns-15.366.92, PMID 17782687 (reprinted 1965, Science 148: 754–759, doi:10.1126/science.148.3671.754)
  • Claeskens, G. (2016), "Statistical model choice" (PDF), Annual Review of Statistics and Its Application, 3 (1): 233–256, Bibcode:2016AnRSA...3..233C, doi:10.1146/annurev-statistics-041715-033413
  • Claeskens, G.; Hjort, N.L. (2008), Model Selection and Model Averaging, Cambridge University Press, ISBN 9781139471800
  • Cox, D.R. (2006), Principles of Statistical Inference, Cambridge University Press
  • Ding, J.; Tarokh, V.; Yang, Y. (2018), "Model Selection Techniques - An Overview", IEEE Signal Processing Magazine, 35 (6): 16–34, arXiv:1810.09583, Bibcode:2018ISPM...35f..16D, doi:10.1109/MSP.2018.2867638, S2CID 53035396
  • Kashyap, R.L. (1982), "Optimal choice of AR and MA parts in autoregressive moving average models", IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-4 (2), IEEE: 99–104, doi:10.1109/TPAMI.1982.4767213, PMID 21869012, S2CID 18484243
  • Konishi, S.; Kitagawa, G. (2008), Information Criteria and Statistical Modeling, Springer, Bibcode:2007icsm.book.....K, ISBN 9780387718866
  • Lahiri, P. (2001), Model Selection, Institute of Mathematical Statistics
  • Leeb, H.; Pötscher, B. M. (2009), "Model Selection", in Anderson, T. G. (ed.), Handbook of Financial Time Series, Springer, pp. 889–925, doi:10.1007/978-3-540-71297-8_39, ISBN 978-3-540-71296-1
  • Lukacs, P. M.; Thompson, W. L.; Kendall, W. L.; Gould, W. R.; Doherty, P. F. Jr.; Burnham, K. P.; Anderson, D. R. (2007), "Concerns regarding a call for pluralism of information theory and hypothesis testing", Journal of Applied Ecology, 44 (2): 456–460, Bibcode:2007JApEc..44..456L, doi:10.1111/j.1365-2664.2006.01267.x, S2CID 83816981
  • McQuarrie, Allan D. R.; Tsai, Chih-Ling (1998), Regression and Time Series Model Selection, Singapore: World Scientific, ISBN 981-02-3242-X
  • Massart, P. (2007), Concentration Inequalities and Model Selection, Springer
  • Massart, P. (2014), "A non-asymptotic walk in probability and statistics", in Lin, Xihong (ed.), Past, Present, and Future of Statistical Science, Chapman & Hall, pp. 309–321, ISBN 9781482204988
  • Navarro, D. J. (2019), "Between the Devil and the Deep Blue Sea: Tensions between scientific judgement and statistical model selection", Computational Brain & Behavior, 2: 28–34, doi:10.1007/s42113-018-0019-z, hdl:1959.4/unsworks_64247
  • Resende, Paulo Angelo Alves; Dorea, Chang Chung Yu (2016), "Model identification using the Efficient Determination Criterion", Journal of Multivariate Analysis, 150: 229–244, arXiv:1409.7441, doi:10.1016/j.jmva.2016.06.002, S2CID 5469654
  • Shmueli, G. (2010), "To explain or to predict?", Statistical Science, 25 (3): 289–310, arXiv:1101.0891, doi:10.1214/10-STS330, MR 2791669, S2CID 15900983
  • Stoica, P.; Selen, Y. (2004), "Model-order selection: a review of information criterion rules" (PDF), IEEE Signal Processing Magazine, 21 (4): 36–47, Bibcode:2004ISPM...21...36S, doi:10.1109/MSP.2004.1311138, S2CID 17338979
  • Wit, E.; van den Heuvel, E.; Romeijn, J.-W. (2012), "'All models are wrong...': an introduction to model uncertainty" (PDF), Statistica Neerlandica, 66 (3): 217–236, doi:10.1111/j.1467-9574.2012.00530.x, S2CID 7793470
  • Wit, E.; McCullagh, P. (2001), Viana, M. A. G.; Richards, D. St. P. (eds.), "The extendibility of statistical models", Algebraic Methods in Statistics and Probability, pp. 327–340
  • Wójtowicz, Anna; Bigaj, Tomasz (2016), "Justification, confirmation, and the problem of mutually exclusive hypotheses", in Kuźniar, Adrian; Odrowąż-Sypniewska, Joanna (eds.), Uncovering Facts and Values, Brill Publishers, pp. 122–143, doi:10.1163/9789004312654_009, ISBN 9789004312654
  • Owrang, Arash; Jansson, Magnus (2018), "A Model Selection Criterion for High-Dimensional Linear Regression", IEEE Transactions on Signal Processing, 66 (13): 3436–3446, Bibcode:2018ITSP...66.3436O, doi:10.1109/TSP.2018.2821628, ISSN 1941-0476, S2CID 46931136
  • B. Gohain, Prakash; Jansson, Magnus (2022), "Scale-invariant and consistent Bayesian information criterion for order selection in linear regression models", Signal Processing, 196: 108499, Bibcode:2022SigPr.19608499G, doi:10.1016/j.sigpro.2022.108499, ISSN 0165-1684, S2CID 246759677