Generalized extreme value distribution - Wikipedia
From Wikipedia, the free encyclopedia
Family of probability distributions
Notation: GEV(μ, σ, ξ)
Parameters: μ ∈ ℝ (location), σ > 0 (scale), ξ ∈ ℝ (shape)
Support: x ∈ [μ − σ/ξ, +∞) when ξ > 0;
 x ∈ (−∞, +∞) when ξ = 0;
 x ∈ (−∞, μ − σ/ξ] when ξ < 0
PDF: (1/σ) t(x)^(ξ+1) e^(−t(x)),
 where t(x) = [1 + ξ(x − μ)/σ]^(−1/ξ) if ξ ≠ 0, and t(x) = exp(−(x − μ)/σ) if ξ = 0
CDF: e^(−t(x)) for x in the support (see above)
Mean: μ + σ(g₁ − 1)/ξ if ξ ≠ 0 and ξ < 1; μ + σγ if ξ = 0; ∞ if ξ ≥ 1,
 where gₖ = Γ(1 − kξ) (see gamma function) and γ is the Euler–Mascheroni constant
Median: μ + σ((ln 2)^(−ξ) − 1)/ξ if ξ ≠ 0; μ − σ ln ln 2 if ξ = 0
Mode: μ + σ((1 + ξ)^(−ξ) − 1)/ξ if ξ ≠ 0; μ if ξ = 0
Variance: σ²(g₂ − g₁²)/ξ² if ξ ≠ 0 and ξ < 1/2; σ²π²/6 if ξ = 0; ∞ if ξ ≥ 1/2
Skewness: sgn(ξ) (g₃ − 3g₂g₁ + 2g₁³)/(g₂ − g₁²)^(3/2) if ξ ≠ 0 and ξ < 1/3; 12√6 ζ(3)/π³ if ξ = 0,
 where sgn is the sign function and ζ is the Riemann zeta function
Excess kurtosis: (g₄ − 4g₃g₁ + 6g₁²g₂ − 3g₁⁴)/(g₂ − g₁²)² − 3 if ξ ≠ 0 and ξ < 1/4; 12/5 if ξ = 0
Entropy: ln σ + γξ + γ + 1
MGF: see Muraleedharan, Guedes Soares & Lucas (2011)[1]
CF: see Muraleedharan, Guedes Soares & Lucas (2011)[1]

In probability theory and statistics, the generalized extreme value (GEV) distribution[2] is a family of continuous probability distributions developed within extreme value theory to combine the Gumbel, Fréchet and Weibull families, also known as type I, II and III extreme value distributions. By the extreme value theorem the GEV distribution is the only possible limit distribution of properly normalized maxima of a sequence of independent and identically distributed random variables.[3] Note that a limit distribution needs to exist, which requires regularity conditions on the tail of the distribution. Despite this, the GEV distribution is often used as an approximation to model the maxima of long (finite) sequences of random variables.

In some fields of application the generalized extreme value distribution is known as the Fisher–Tippett distribution, named after R.A. Fisher and L.H.C. Tippett, who recognised the three different forms outlined below. However, usage of this name is sometimes restricted to mean the special case of the Gumbel distribution. The origin of the common functional form for all three distributions dates back to at least Jenkinson (1955),[4] though allegedly[3] it could also have been given by von Mises (1936).[5]

Specification


Using the standardized variable s = (x − μ)/σ, where μ, the location parameter, can be any real number and σ > 0 is the scale parameter, the cumulative distribution function of the GEV distribution is then

F(s; ξ) = exp(−e^(−s))  for ξ = 0,
          exp(−(1 + ξs)^(−1/ξ))  for ξ ≠ 0 and ξs > −1,
          0  for ξ > 0 and s ≤ −1/ξ,
          1  for ξ < 0 and s ≥ 1/|ξ|,

where ξ, the shape parameter, can be any real number. Thus, for ξ > 0 the expression is valid for s > −1/ξ, while for ξ < 0 it is valid for s < −1/ξ. In the first case, −1/ξ is the negative lower end-point, where F is 0; in the second case, −1/ξ is the positive upper end-point, where F is 1. For ξ = 0 the second expression is formally undefined and is replaced with the first expression, which is the limit of the second as ξ → 0; in this case s can be any real number.

In the special case of x = μ, we have s = 0, so F(0; ξ) = e^(−1) ≈ 0.368 regardless of the values of ξ and σ.
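As a concrete illustration, the piecewise cumulative distribution function can be sketched in Python (the function name and default arguments are illustrative choices, not part of the article):

```python
import math

def gev_cdf(x, mu=0.0, sigma=1.0, xi=0.0):
    """CDF of GEV(mu, sigma, xi), following the piecewise definition above."""
    s = (x - mu) / sigma
    if xi == 0.0:
        return math.exp(-math.exp(-s))
    t = 1.0 + xi * s
    if t <= 0.0:
        # outside the support: below the lower end-point when xi > 0,
        # above the upper end-point when xi < 0
        return 0.0 if xi > 0.0 else 1.0
    return math.exp(-t ** (-1.0 / xi))
```

Evaluating at x = μ returns e^(−1) ≈ 0.368 for any shape parameter, as noted above.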

The probability density function of the standardized distribution is

f(s; ξ) = e^(−s) exp(−e^(−s))  for ξ = 0,
          (1 + ξs)^(−(1 + 1/ξ)) exp(−(1 + ξs)^(−1/ξ))  for ξ ≠ 0 and ξs > −1,
          0  otherwise;

again valid for s > −1/ξ in the case ξ > 0, and for s < −1/ξ in the case ξ < 0. The density is zero outside the relevant range. In the case ξ = 0 the density is positive on the whole real line.
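The density of the general (non-standardized) distribution, obtained by substituting s = (x − μ)/σ and dividing by σ, can be sketched as follows (function name mine); the numerical check that it integrates to 1 over the support is one quick sanity test:

```python
import math

def gev_pdf(x, mu=0.0, sigma=1.0, xi=0.0):
    """Density of GEV(mu, sigma, xi); zero outside the support."""
    s = (x - mu) / sigma
    if xi == 0.0:
        return math.exp(-s) * math.exp(-math.exp(-s)) / sigma
    t = 1.0 + xi * s
    if t <= 0.0:
        return 0.0  # outside the support
    return t ** (-(1.0 + 1.0 / xi)) * math.exp(-t ** (-1.0 / xi)) / sigma
```

For example, a midpoint-rule sum of gev_pdf over a wide grid covering the support (for a moderate positive ξ) comes out close to 1.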

Since the cumulative distribution function is invertible, the quantile function for the GEV distribution has an explicit expression, namely

Q(p; μ, σ, ξ) = μ − σ ln(−ln p)  for ξ = 0 and p ∈ (0, 1),
                μ + (σ/ξ)((−ln p)^(−ξ) − 1)  for ξ > 0 and p ∈ [0, 1), or ξ < 0 and p ∈ (0, 1];

and therefore the quantile density function q = dQ/dp is

q(p; σ, ξ) = σ / ((−ln p)^(ξ+1) p)  for p ∈ (0, 1),

valid for σ > 0 and for any real ξ.
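Since the quantile function is the inverse of the CDF, a round-trip check F(Q(p)) = p is a natural test of both expressions. A minimal Python sketch (names mine; the CDF is repeated to keep the example self-contained):

```python
import math

def gev_quantile(p, mu=0.0, sigma=1.0, xi=0.0):
    """Quantile function Q(p) of GEV(mu, sigma, xi) for p in (0, 1)."""
    if xi == 0.0:
        return mu - sigma * math.log(-math.log(p))
    return mu + (sigma / xi) * ((-math.log(p)) ** (-xi) - 1.0)

def gev_cdf(x, mu=0.0, sigma=1.0, xi=0.0):
    """CDF, for checking the round trip F(Q(p)) = p."""
    s = (x - mu) / sigma
    if xi == 0.0:
        return math.exp(-math.exp(-s))
    t = 1.0 + xi * s
    if t <= 0.0:
        return 0.0 if xi > 0.0 else 1.0
    return math.exp(-t ** (-1.0 / xi))
```

Setting p = 1/2 in gev_quantile with ξ = 0 recovers the median μ − σ ln ln 2 listed earlier.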

Example of probability density functions for distributions of the GEV family.[6]

Summary statistics


Using gₖ ≡ Γ(1 − kξ) for k ∈ {1, 2, 3, 4}, where Γ(·) is the gamma function, some simple statistics of the distribution are given by:[citation needed]

E(X) = μ + (g₁ − 1) σ/ξ  for ξ < 1,
Var(X) = (g₂ − g₁²) σ²/ξ²,
Mode(X) = μ + ((1 + ξ)^(−ξ) − 1) σ/ξ.

The skewness is

skewness(X) = sgn(ξ) (g₃ − 3g₂g₁ + 2g₁³)/(g₂ − g₁²)^(3/2)  for ξ ≠ 0,
              12√6 ζ(3)/π³ ≈ 1.14  for ξ = 0.

The excess kurtosis is:

excess kurtosis(X) = (g₄ − 4g₃g₁ + 6g₂g₁² − 3g₁⁴)/(g₂ − g₁²)² − 3.
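The mean and variance formulas in terms of gₖ = Γ(1 − kξ) can be evaluated directly with the standard library's gamma function. A sketch (function names mine), together with a continuity check that the ξ → 0 limits recover the Gumbel values μ + σγ and σ²π²/6:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def gev_mean(mu, sigma, xi):
    """Mean of GEV(mu, sigma, xi); infinite for xi >= 1."""
    if xi >= 1.0:
        return math.inf
    if xi == 0.0:
        return mu + sigma * EULER_GAMMA
    g1 = math.gamma(1.0 - xi)
    return mu + sigma * (g1 - 1.0) / xi

def gev_variance(sigma, xi):
    """Variance of GEV(mu, sigma, xi); infinite for xi >= 1/2."""
    if xi >= 0.5:
        return math.inf
    if xi == 0.0:
        return sigma ** 2 * math.pi ** 2 / 6.0
    g1 = math.gamma(1.0 - xi)
    g2 = math.gamma(1.0 - 2.0 * xi)
    return sigma ** 2 * (g2 - g1 ** 2) / xi ** 2
```

Evaluating at a very small ξ gives values close to the ξ = 0 branch, confirming that the two branches fit together continuously.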


Link to Fréchet, Weibull, and Gumbel families


The shape parameter ξ governs the tail behavior of the distribution. The sub-families are defined by the three cases ξ = 0, ξ > 0, and ξ < 0; these correspond, respectively, to the Gumbel, Fréchet, and Weibull families, whose cumulative distribution functions are displayed below.

  • Type I or Gumbel extreme value distribution, case ξ = 0, for all x ∈ (−∞, +∞):
F(x; μ, σ, 0) = exp(−exp(−(x − μ)/σ)).
  • Type II or Fréchet extreme value distribution, case ξ > 0, for all x ∈ (μ − σ/ξ, +∞):
Let α ≡ 1/ξ > 0 and y ≡ 1 + (ξ/σ)(x − μ);
F(x; μ, σ, ξ) = 0  for y ≤ 0 (equivalently x ≤ μ − σ/ξ),
                exp(−1/y^α)  for y > 0 (equivalently x > μ − σ/ξ).
  • Type III or reversed Weibull extreme value distribution, case ξ < 0, for all x ∈ (−∞, μ + σ/|ξ|):
Let α ≡ −1/ξ > 0 and y ≡ 1 − (|ξ|/σ)(x − μ);
F(x; μ, σ, ξ) = exp(−y^α)  for y > 0 (equivalently x < μ + σ/|ξ|),
                1  for y ≤ 0 (equivalently x ≥ μ + σ/|ξ|).
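That the type II and type III forms are just reparametrizations of the general CDF can be checked numerically. The following sketch (function names mine) evaluates all three expressions and confirms they agree on the whole real line:

```python
import math

def gev_cdf(x, mu, sigma, xi):
    """General GEV CDF (the xi = 0 branch is the Gumbel case)."""
    s = (x - mu) / sigma
    if xi == 0.0:
        return math.exp(-math.exp(-s))
    t = 1.0 + xi * s
    if t <= 0.0:
        return 0.0 if xi > 0.0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

def type2_cdf(x, mu, sigma, xi):
    """Frechet (type II) form, xi > 0: alpha = 1/xi, y = 1 + (xi/sigma)(x - mu)."""
    alpha = 1.0 / xi
    y = 1.0 + (xi / sigma) * (x - mu)
    return 0.0 if y <= 0.0 else math.exp(-1.0 / y ** alpha)

def type3_cdf(x, mu, sigma, xi):
    """Reversed Weibull (type III) form, xi < 0: alpha = -1/xi, y = 1 - (|xi|/sigma)(x - mu)."""
    alpha = -1.0 / xi
    y = 1.0 - (abs(xi) / sigma) * (x - mu)
    return math.exp(-y ** alpha) if y > 0.0 else 1.0
```

With y = 1 + ξs the identities (1 + ξs)^(−1/ξ) = 1/y^α (for ξ > 0) and (1 + ξs)^(−1/ξ) = y^α (for ξ < 0) make the agreement exact, not merely approximate.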

The subsections below remark on properties of these distributions.

Modification for minima rather than maxima


The theory here relates to data maxima, and the distribution being discussed is an extreme value distribution for maxima. A generalised extreme value distribution for data minima can be obtained, for example, by substituting −x for x in the distribution function and subtracting the cumulative distribution from one: that is, replace F(x) with 1 − F(−x). Doing so yields yet another family of distributions.
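The substitution rests on the identity min(x₁, …, xₙ) = −max(−x₁, …, −xₙ). A minimal sketch of the resulting minima distribution for the Gumbel case (function names mine):

```python
import math

def gumbel_cdf(x, mu=0.0, sigma=1.0):
    """Type I (Gumbel) CDF for maxima."""
    return math.exp(-math.exp(-(x - mu) / sigma))

def gumbel_min_cdf(x, mu=0.0, sigma=1.0):
    """Corresponding distribution for minima: F_min(x) = 1 - F_max(-x)."""
    return 1.0 - gumbel_cdf(-x, mu, sigma)
```

The result is again a proper (increasing) distribution function, as the monotonicity check below illustrates.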

Alternative convention for the Weibull distribution


The ordinary Weibull distribution arises in reliability applications and is obtained from the distribution here by using the variable t = μ − x, which gives a strictly positive support, in contrast to the use in the formulation of extreme value theory here. This arises because the ordinary Weibull distribution is used for cases that deal with data minima rather than data maxima. The distribution here has an additional parameter compared to the usual form of the Weibull distribution and, in addition, is reversed, so that the distribution has an upper bound rather than a lower bound. Importantly, in applications of the GEV the upper bound is unknown and so must be estimated, whereas when applying the ordinary Weibull distribution in reliability applications the lower bound is usually known to be zero.

Ranges of the distributions


Note the differences in the ranges of interest for the three extreme value distributions: Gumbel is unlimited, Fréchet has a lower limit, while the reversed Weibull has an upper limit. More precisely, univariate extreme value theory describes which of the three is the limiting law according to the initial law X, and in particular depending on the original distribution's tail.

Distribution of log variables


One can link the type I to types II and III in the following way: if the cumulative distribution function of some random variable X is of type II, with the positive numbers as support, i.e. F(x; 0, σ, α), then the cumulative distribution function of ln X is of type I, namely F(x; ln σ, 1/α, 0). Similarly, if the cumulative distribution function of X is of type III, with the negative numbers as support, i.e. F(x; 0, σ, −α), then the cumulative distribution function of ln(−X) is of type I, namely F(x; −ln σ, 1/α, 0).
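The type II case of this link can be verified numerically: with a Fréchet CDF exp(−(x/σ)^(−α)) on (0, ∞), the CDF of ln X is exactly a Gumbel CDF with location ln σ and scale 1/α. A sketch (the parameter values and function names are illustrative):

```python
import math

ALPHA, SIGMA = 2.5, 3.0  # illustrative values (my choice)

def type2_cdf(x):
    """Type II (Frechet) CDF with location 0: exp(-(x/sigma)^(-alpha)) for x > 0."""
    return 0.0 if x <= 0.0 else math.exp(-(x / SIGMA) ** -ALPHA)

def type1_cdf(z, mu, beta):
    """Type I (Gumbel) CDF: exp(-exp(-(z - mu)/beta))."""
    return math.exp(-math.exp(-(z - mu) / beta))

def log_of_type2_cdf(z):
    """CDF of ln(X) when X has the type II CDF above: P(ln X <= z) = P(X <= e^z)."""
    return type2_cdf(math.exp(z))
```

Algebraically, exp(−(e^z/σ)^(−α)) = exp(−e^(−α(z − ln σ))), so the agreement is exact for every z.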


Link to logit models (logistic regression)


Multinomial logit models, and certain other types of logistic regression, can be phrased as latent variable models with error variables distributed as Gumbel distributions (type I generalized extreme value distributions). This phrasing is common in the theory of discrete choice models, which include logit models, probit models, and various extensions of them, and derives from the fact that the difference of two type-I GEV-distributed variables follows a logistic distribution, of which the logit function is the quantile function. The type-I GEV distribution thus plays the same role in these logit models as the normal distribution does in the corresponding probit models.
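The underlying fact, that the difference of two independent type-I GEV (Gumbel) variables with a common scale follows a logistic distribution, can be checked by simulation. A seeded Monte Carlo sketch (names and parameter values are mine):

```python
import math
import random

def sample_gumbel(mu, beta, rng):
    """Inverse-CDF sampling from Gumbel(mu, beta): Q(p) = mu - beta*ln(-ln p)."""
    return mu - beta * math.log(-math.log(rng.random()))

def logistic_cdf(x, mu, beta):
    return 1.0 / (1.0 + math.exp(-(x - mu) / beta))

rng = random.Random(0)
n = 200_000
# difference of Gumbel(1, 2) and Gumbel(0.5, 2); should follow Logistic(0.5, 2)
diffs = [sample_gumbel(1.0, 2.0, rng) - sample_gumbel(0.5, 2.0, rng) for _ in range(n)]
```

The empirical CDF of the differences matches the Logistic(0.5, 2) CDF to within Monte Carlo error.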

Properties


The cumulative distribution function of the generalized extreme value distribution solves the stability postulate equation.[citation needed] The generalized extreme value distribution is a special case of a max-stable distribution, and is a transformation of a min-stable distribution.

Applications

  • The GEV distribution is widely used in the treatment of "tail risks" in fields ranging from insurance to finance. In the latter case, it has been considered as a means of assessing various financial risks via metrics such as value at risk.[7][8]
Fitted GEV probability distribution to monthly maximum one-day rainfalls in October, Surinam
  • However, the resulting shape parameters have been found to lie in the range leading to undefined means and variances, which underlines the fact that reliable data analysis is often impossible.[9][full citation needed]
  • In hydrology the GEV distribution is applied to extreme events such as annual maximum one-day rainfalls and river discharges.[10] The blue picture illustrates an example of fitting the GEV distribution to ranked annual maximum one-day rainfalls, also showing the 90% confidence belt based on the binomial distribution. The rainfall data are represented by plotting positions as part of the cumulative frequency analysis.

Prediction

  • It is often of interest to predict probabilities of out-of-sample data under the assumption that both the training data and the out-of-sample data follow a GEV distribution.
  • Predictions of probabilities generated by substituting maximum likelihood estimates of the GEV parameters into the cumulative distribution function ignore parameter uncertainty. As a result, the probabilities are not well calibrated, do not reflect the frequencies of out-of-sample events, and, in particular, underestimate the probabilities of out-of-sample tail events.[11]
  • Predictions generated using the objective Bayesian approach of calibrating prior prediction have been shown to greatly reduce this underestimation, although not completely eliminate it.[11] Calibrating prior prediction is implemented in the R software package fitdistcp.[12]

Example for Normally distributed variables


Let {Xᵢ | 1 ≤ i ≤ n} be i.i.d. normally distributed random variables with mean 0 and variance 1. The Fisher–Tippett–Gnedenko theorem[13] tells us that max{Xᵢ | 1 ≤ i ≤ n} ∼ GEV(μₙ, σₙ, 0), where

μₙ = Φ⁻¹(1 − 1/n)
σₙ = Φ⁻¹(1 − 1/(n e)) − Φ⁻¹(1 − 1/n).

This allows us to estimate, e.g., the mean of max{Xᵢ | 1 ≤ i ≤ n} from the mean of the GEV distribution:

E{max{Xᵢ | 1 ≤ i ≤ n}} ≈ μₙ + γ_E σₙ
 = (1 − γ_E) Φ⁻¹(1 − 1/n) + γ_E Φ⁻¹(1 − 1/(e n))
 = √( ln( n² / (2π ln(n²/(2π))) ) ) · (1 + γ/ln n + o(1/ln n)),

where γ_E is the Euler–Mascheroni constant.
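The normalizing constants μₙ and σₙ only require the inverse standard normal CDF, which the Python standard library provides. A sketch (function name mine):

```python
import math
from statistics import NormalDist

EULER_GAMMA = 0.5772156649015329

def normal_max_gev_params(n):
    """mu_n and sigma_n from the formulas above, via the inverse standard normal CDF."""
    inv = NormalDist().inv_cdf
    mu_n = inv(1.0 - 1.0 / n)
    sigma_n = inv(1.0 - 1.0 / (n * math.e)) - mu_n
    return mu_n, sigma_n

mu_n, sigma_n = normal_max_gev_params(1000)
# estimate of the mean of the maximum of 1000 standard normal variables
approx_mean = mu_n + EULER_GAMMA * sigma_n
```

For n = 1000 this gives μₙ = Φ⁻¹(0.999) ≈ 3.09, with a small positive scale σₙ, so the estimated mean of the maximum sits slightly above μₙ.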

Related distributions

  1. If X ∼ GEV(μ, σ, ξ) then mX + b ∼ GEV(mμ + b, |m|σ, ξ).
  2. If X ∼ Gumbel(μ, σ) (Gumbel distribution) then X ∼ GEV(μ, σ, 0).
  3. If X ∼ Weibull(σ, μ) (Weibull distribution) then μ(1 − σ ln(X/σ)) ∼ GEV(μ, σ, 0).
  4. If X ∼ GEV(μ, σ, 0) then σ exp(−(X − μ)/(μσ)) ∼ Weibull(σ, μ) (Weibull distribution).
  5. If X ∼ Exponential(1) (exponential distribution) then μ − σ ln X ∼ GEV(μ, σ, 0).
  6. If X ∼ Gumbel(α_X, β) and Y ∼ Gumbel(α_Y, β) then X − Y ∼ Logistic(α_X − α_Y, β) (see logistic distribution).
  7. If X and Y ∼ Gumbel(α, β) then X + Y ≁ Logistic(2α, β) (the sum is not a logistic distribution). Note that E{X + Y} = 2α + 2βγ ≠ 2α = E{Logistic(2α, β)}.
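Relation 5 can be seen directly from inverse-CDF sampling: if X = −ln U is Exponential(1) with U uniform on (0, 1), then μ − σ ln X = μ − σ ln(−ln U), which is exactly the Gumbel quantile transform. A seeded simulation sketch (names and parameter values mine):

```python
import math
import random

def gumbel_cdf(x, mu, sigma):
    return math.exp(-math.exp(-(x - mu) / sigma))

rng = random.Random(1)
mu, sigma, n = 2.0, 1.5, 200_000
# X ~ Exponential(1) sampled by inversion as -ln(U),
# then mapped through relation 5 to mu - sigma*ln(X)
ys = [mu - sigma * math.log(-math.log(rng.random())) for _ in range(n)]
```

The empirical CDF of ys matches the Gumbel(μ, σ) = GEV(μ, σ, 0) CDF to within Monte Carlo error.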

Proofs


4. Let $X \sim \mathrm{Weibull}(\sigma, \mu)$; then the cumulative distribution function of $g(X) = \mu\left(1 - \sigma \log\frac{X}{\sigma}\right)$ is:

$$
\begin{aligned}
\mathbb{P}\left\{\, \mu\left(1 - \sigma \log\frac{X}{\sigma}\right) < x \,\right\}
  &= \mathbb{P}\left\{\, \log\frac{X}{\sigma} > \frac{1 - x/\mu}{\sigma} \,\right\}
     && \text{(the logarithm is increasing)} \\
  &= \mathbb{P}\left\{\, X > \sigma \exp\left[\frac{1 - x/\mu}{\sigma}\right] \,\right\} \\
  &= \exp\left(-\left(\exp\left[\frac{1 - x/\mu}{\sigma}\right]\right)^{\mu}\right)
     && \text{(Weibull survival function; the factors of } \sigma \text{ cancel)} \\
  &= \exp\left(-\exp\left[\frac{\mu - x}{\sigma}\right]\right) \\
  &= \exp\left(-\exp[-s]\right), \qquad s = \frac{x - \mu}{\sigma}\,,
\end{aligned}
$$

which is the cdf of $\mathrm{GEV}(\mu, \sigma, 0)$.
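The transformation in this proof can be verified numerically. A minimal sketch (assuming NumPy, reading $\mathrm{Weibull}(\sigma, \mu)$ as scale $\sigma$ and shape $\mu$, matching the survival function used above; parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
mu, sigma = 2.0, 1.5   # arbitrary illustrative parameters

# X ~ Weibull with scale sigma and shape mu:  P{X > t} = exp(-(t/sigma)^mu)
x = sigma * rng.weibull(mu, size=n)
g = mu * (1.0 - sigma * np.log(x / sigma))   # the transformation from the proof

def gumbel_cdf(t, loc, scale):
    # cdf of GEV(loc, scale, 0), i.e. the Gumbel distribution
    return np.exp(-np.exp(-(t - loc) / scale))

# Largest deviation between the empirical CDF of g(X) and the GEV(mu, sigma, 0) CDF.
max_err = max(abs(np.mean(g < t) - gumbel_cdf(t, mu, sigma))
              for t in (-1.0, 1.0, 3.0, 6.0))
print(max_err)   # should be small (sampling error only)
```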

5. Let $X \sim \mathrm{Exponential}(1)$; then the cumulative distribution function of $g(X) = \mu - \sigma \log X$ is:

$$
\begin{aligned}
\mathbb{P}\left\{\, \mu - \sigma \log X < x \,\right\}
  &= \mathbb{P}\left\{\, \log X > \frac{\mu - x}{\sigma} \,\right\}
     && \text{(the logarithm is increasing)} \\
  &= \mathbb{P}\left\{\, X > \exp\left(\frac{\mu - x}{\sigma}\right) \,\right\} \\
  &= \exp\left[-\exp\left(\frac{\mu - x}{\sigma}\right)\right] \\
  &= \exp\left[-\exp(-s)\right], \qquad s \equiv \frac{x - \mu}{\sigma}\,,
\end{aligned}
$$

which is the cumulative distribution function of $\mathrm{GEV}(\mu, \sigma, 0)$.
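This identity, too, can be checked by simulation. A short sketch (assuming NumPy; parameter values are arbitrary) draws $X \sim \mathrm{Exponential}(1)$ and compares the empirical distribution of $\mu - \sigma \log X$ with the $\mathrm{GEV}(\mu, \sigma, 0)$ CDF:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
mu, sigma = 0.5, 2.0   # arbitrary illustrative parameters

x = rng.exponential(1.0, size=n)   # X ~ Exponential(1)
g = mu - sigma * np.log(x)         # claimed to be GEV(mu, sigma, 0)

def gumbel_cdf(t, loc, scale):
    # cdf of GEV(loc, scale, 0), i.e. the Gumbel distribution
    return np.exp(-np.exp(-(t - loc) / scale))

# Largest deviation between the empirical CDF of g(X) and the GEV(mu, sigma, 0) CDF.
max_err = max(abs(np.mean(g < t) - gumbel_cdf(t, mu, sigma))
              for t in (-2.0, 0.0, 2.0, 5.0))
print(max_err)   # should be small (sampling error only)
```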

See also

  • Extreme value theory (univariate theory)
  • Fisher–Tippett–Gnedenko theorem
  • Generalized Pareto distribution
  • German tank problem, opposite question of population maximum given sample maximum
  • Pickands–Balkema–De Haan theorem


Further reading

  • Embrechts, Paul; Klüppelberg, Claudia; Mikosch, Thomas (1997). Modelling Extremal Events for Insurance and Finance. Berlin, DE: Springer Verlag. ISBN 9783540609315 – via Google books.
  • Leadbetter, M.R.; Lindgren, G.; Rootzén, H. (1983). Extremes and Related Properties of Random Sequences and Processes. Springer-Verlag. ISBN 0-387-90731-9.
  • Resnick, S.I. (1987). Extreme Values, Regular Variation, and Point Processes. Springer-Verlag. ISBN 0-387-96481-9.
  • Coles, Stuart (2001). An Introduction to Statistical Modeling of Extreme Values. Springer-Verlag. ISBN 1-85233-459-2.