Vandermonde matrix
From Wikipedia, the free encyclopedia
Matrix of geometric progressions

In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row: an $(m+1)\times(n+1)$ matrix

$$V=V(x_0,x_1,\cdots,x_m)=\begin{pmatrix}1&x_0&x_0^2&\dots&x_0^n\\1&x_1&x_1^2&\dots&x_1^n\\1&x_2&x_2^2&\dots&x_2^n\\\vdots&\vdots&\vdots&\ddots&\vdots\\1&x_m&x_m^2&\dots&x_m^n\end{pmatrix}$$

with entries $V_{i,j}=x_i^j$, the $j$th power of the number $x_i$, for all zero-based indices $i$ and $j$.[1] Some authors define the Vandermonde matrix as the transpose of the above matrix.[2][3]

The determinant of a square Vandermonde matrix (when $n=m$) is called a Vandermonde determinant or Vandermonde polynomial. Its value is:

$$\det(V)=\prod_{0\leq i<j\leq m}(x_j-x_i).$$

This is non-zero if and only if all $x_i$ are distinct (no two are equal), making the Vandermonde matrix invertible.
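As a quick numerical check (a NumPy sketch; the nodes are arbitrary), the determinant of a square Vandermonde matrix can be compared against the product formula:

```python
import numpy as np
from itertools import combinations

x = np.array([1.0, 2.0, 4.0, 7.0])
# np.vander with increasing=True builds rows (1, x_i, x_i^2, x_i^3)
V = np.vander(x, increasing=True)

det_direct = np.linalg.det(V)
det_formula = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])
# both equal (2-1)(4-1)(7-1)(4-2)(7-2)(7-4) = 540
```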

Applications


The polynomial interpolation problem is to find a polynomial $p(x)=a_0+a_1x+a_2x^2+\dots+a_nx^n$ which satisfies $p(x_0)=y_0,\ldots,p(x_m)=y_m$ for given data points $(x_0,y_0),\ldots,(x_m,y_m)$. This problem can be reformulated in terms of linear algebra by means of the Vandermonde matrix, as follows. $V$ computes the values of $p(x)$ at the points $x=x_0,x_1,\dots,x_m$ via a matrix multiplication $Va=y$, where $a=(a_0,\ldots,a_n)$ is the vector of coefficients and $y=(y_0,\ldots,y_m)=(p(x_0),\ldots,p(x_m))$ is the vector of values (both written as column vectors):

$$\begin{pmatrix}1&x_0&x_0^2&\dots&x_0^n\\1&x_1&x_1^2&\dots&x_1^n\\1&x_2&x_2^2&\dots&x_2^n\\\vdots&\vdots&\vdots&\ddots&\vdots\\1&x_m&x_m^2&\dots&x_m^n\end{pmatrix}\cdot\begin{pmatrix}a_0\\a_1\\\vdots\\a_n\end{pmatrix}=\begin{pmatrix}p(x_0)\\p(x_1)\\\vdots\\p(x_m)\end{pmatrix}.$$

If $n=m$ and $x_0,\dots,x_n$ are distinct, then $V$ is a square matrix with non-zero determinant, i.e. an invertible matrix. Thus, given $V$ and $y$, one can find the required $p(x)$ by solving for its coefficients $a$ in the equation $Va=y$:

$$a=V^{-1}y.$$

That is, the map from coefficients to values of polynomials is a bijective linear mapping with matrix V, and the interpolation problem has a unique solution. This result is called the unisolvence theorem, and is a special case of the Chinese remainder theorem for polynomials.
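Concretely, interpolation through given data points reduces to one linear solve (a NumPy sketch; the sample data are made up):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

V = np.vander(x, increasing=True)  # square, since there are n + 1 points
a = np.linalg.solve(V, y)          # coefficients a_0, ..., a_n of p

# the interpolant reproduces the data: p(x_i) = y_i
p_vals = V @ a
```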

In statistics, the equation $Va=y$ means that the Vandermonde matrix is the design matrix of polynomial regression.

In numerical analysis, solving the equation $Va=y$ naïvely by Gaussian elimination results in an algorithm with time complexity O(n³). Exploiting the structure of the Vandermonde matrix, one can instead use Newton's divided differences method[4][5] to solve the equation in O(n²) time, which also gives the UL factorization of $V^{-1}$. The resulting algorithm produces extremely accurate solutions, even if $V$ is ill-conditioned.[2] (See polynomial interpolation.) Using displacement rank, one obtains a method requiring $\tilde O(\alpha^{\omega-1}n)$ operations with fast matrix multiplication algorithms, where $\alpha$ is the displacement rank and $\omega<2.372$ is the matrix multiplication exponent.[6]
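The O(n²) divided-difference solve can be sketched as follows (a Björck–Pereyra-style variant; the function name is ours, and NumPy is used only for the cross-check): a forward sweep computes Newton divided differences of the data, and a backward sweep converts the Newton form to monomial coefficients.

```python
import numpy as np

def vander_solve(x, y):
    """Solve V a = y in O(n^2), where V = np.vander(x, increasing=True)."""
    n = len(x)
    a = [float(v) for v in y]
    for k in range(n - 1):                 # forward: divided differences
        for i in range(n - 1, k, -1):
            a[i] = (a[i] - a[i - 1]) / (x[i] - x[i - k - 1])
    for k in range(n - 2, -1, -1):         # backward: Newton -> monomial basis
        for i in range(k, n - 1):
            a[i] -= x[k] * a[i + 1]
    return np.array(a)

x = np.array([0.0, 1.0, 3.0, 4.0])
y = np.array([2.0, -1.0, 4.0, 0.0])
a_fast = vander_solve(x, y)
a_ref = np.linalg.solve(np.vander(x, increasing=True), y)
```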

The Vandermonde determinant is used in the representation theory of the symmetric group.[7]

When the values x i {\displaystyle x_{i}} {\displaystyle x_{i}} belong to a finite field, the Vandermonde determinant is also called the Moore determinant, and has properties which are important in the theory of BCH codes and Reed–Solomon error correction codes.

The discrete Fourier transform is defined by a specific Vandermonde matrix, the DFT matrix, where the $x_i$ are chosen to be $n$th roots of unity. The fast Fourier transform computes the product of this matrix with a vector in $O(n\log^2 n)$ time.[8] See the article on multipoint polynomial evaluation for details.
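For example (a NumPy sketch; the sign convention follows `np.fft.fft`), choosing the nodes as roots of unity reproduces the DFT matrix:

```python
import numpy as np

n = 8
# nodes: n-th roots of unity, with the sign convention of np.fft.fft
omega = np.exp(-2j * np.pi * np.arange(n) / n)
F = np.vander(omega, increasing=True)   # the DFT matrix

v = np.arange(n, dtype=float)
dft_via_vander = F @ v                  # matches np.fft.fft(v)
```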

In the physical theory of the quantum Hall effect, the Vandermonde determinant shows that the Laughlin wavefunction with filling factor 1 is equal to a Slater determinant. This is no longer true for filling factors different from 1 in the fractional quantum Hall effect.

In the geometry of polyhedra, the Vandermonde matrix gives the normalized volume of arbitrary $k$-faces of cyclic polytopes. Specifically, if $F=C_d(t_{i_1},\dots,t_{i_{k+1}})$ is a $k$-face of the cyclic polytope $C_d(T)\subset\mathbb{R}^d$ corresponding to $T=\{t_1<\cdots<t_N\}\subset\mathbb{R}$, then

$$\mathrm{nvol}(F)=\frac{1}{k!}\prod_{1\leq m<n\leq k+1}(t_{i_n}-t_{i_m}).$$

Determinant


The determinant of a square Vandermonde matrix is called a Vandermonde polynomial or Vandermonde determinant. Its value is the polynomial

$$\det(V)=\prod_{0\leq i<j\leq n}(x_j-x_i)$$

which is non-zero if and only if all $x_i$ are distinct.

The Vandermonde determinant was formerly sometimes called the discriminant, but in current terminology the discriminant of a polynomial $p(x)=(x-x_0)\cdots(x-x_n)$ is the square of the Vandermonde determinant of the roots $x_i$. The Vandermonde determinant is an alternating form in the $x_i$, meaning that exchanging two $x_i$ changes the sign, and $\det(V)$ thus depends on the order of the $x_i$. By contrast, the discriminant $\det(V)^2$ does not depend on any order, so that Galois theory implies that the discriminant is a polynomial function of the coefficients of $p(x)$.

The determinant formula is proved below in three ways. The first uses polynomial properties, especially the unique factorization property of multivariate polynomials. Although conceptually simple, it involves non-elementary concepts of abstract algebra. The second proof is based on the linear algebra concepts of change of basis in a vector space and the determinant of a linear map. In the process, it computes the LU decomposition of the Vandermonde matrix. The third proof is more elementary but more complicated, using only elementary row and column operations.

First proof: Polynomial properties


The first proof relies on properties of polynomials.

By the Leibniz formula, $\det(V)$ is a polynomial in the $x_i$, with integer coefficients. All entries of the $i$-th column (zero-based) have total degree $i$. Thus, again by the Leibniz formula, all terms of the determinant have total degree

$$0+1+2+\cdots+n=\frac{n(n+1)}{2};$$

(that is, the determinant is a homogeneous polynomial of this degree).

If, for $i\neq j$, one substitutes $x_i$ for $x_j$, one gets a matrix with two equal rows, which thus has zero determinant. So, considering the determinant as univariate in $x_i$, the factor theorem implies that $x_j-x_i$ is a divisor of $\det(V)$. It thus follows that for all $i\neq j$, $x_j-x_i$ is a divisor of $\det(V)$.

This will now be strengthened to show that the product of all those divisors of $\det(V)$ is a divisor of $\det(V)$. Indeed, let $p$ be a polynomial with $x_i-x_j$ as a factor; then $p=(x_i-x_j)\,q$ for some polynomial $q$. If $x_k-x_l$ is another factor of $p$, then $p$ becomes zero after the substitution of $x_k$ for $x_l$. If $\{x_i,x_j\}\neq\{x_k,x_l\}$, the factor $q$ becomes zero after this substitution, since the factor $x_i-x_j$ remains nonzero. So, by the factor theorem, $x_k-x_l$ divides $q$, and $(x_i-x_j)(x_k-x_l)$ divides $p$.

Iterating this process by starting from $\det(V)$, one gets that $\det(V)$ is divisible by the product of all $x_i-x_j$ with $i<j$; that is,

$$\det(V)=Q\prod_{0\leq i<j\leq n}(x_j-x_i),$$

where $Q$ is a polynomial. As the product of all $x_j-x_i$ and $\det(V)$ have the same degree $n(n+1)/2$, the polynomial $Q$ is, in fact, a constant. This constant is one, because the product of the diagonal entries of $V$ is $x_1x_2^2\cdots x_n^n$, which is also the monomial obtained by taking the first term of all factors in $\prod_{0\leq i<j\leq n}(x_j-x_i)$. This proves that $Q=1$, and finishes the proof.

$$\det(V)=\prod_{0\leq i<j\leq n}(x_j-x_i).$$

Second proof: linear maps


Let $F$ be a field containing all $x_i$, and $P_n$ the $F$-vector space of polynomials of degree at most $n$ with coefficients in $F$. Let

$$\varphi:P_n\to F^{n+1}$$

be the linear map that maps every polynomial in $P_n$ to the $(n+1)$-tuple of its values at the $x_i$, that is,

$$\varphi(p)=(p(x_0),p(x_1),\ldots,p(x_n)).$$

The Vandermonde matrix is the matrix of $\varphi$ with respect to the canonical bases of $P_n$ and $F^{n+1}$.

Changing the basis of P n {\displaystyle P_{n}} {\displaystyle P_{n}} amounts to multiplying the Vandermonde matrix by a change-of-basis matrix M (from the right). This does not change the determinant, if the determinant of M is 1.

The polynomials $1$, $x-x_0$, $(x-x_0)(x-x_1)$, …, $(x-x_0)(x-x_1)\cdots(x-x_{n-1})$ are monic of respective degrees 0, 1, …, n. Their matrix on the monomial basis is an upper-triangular matrix $U$ (if the monomials are ordered in increasing degrees), with all diagonal entries equal to one. This matrix is thus a change-of-basis matrix of determinant one. The matrix of $\varphi$ on this new basis is

$$\begin{pmatrix}1&0&0&\ldots&0\\1&x_1-x_0&0&\ldots&0\\1&x_2-x_0&(x_2-x_0)(x_2-x_1)&\ldots&0\\\vdots&\vdots&\vdots&\ddots&\vdots\\1&x_n-x_0&(x_n-x_0)(x_n-x_1)&\ldots&(x_n-x_0)(x_n-x_1)\cdots(x_n-x_{n-1})\end{pmatrix}.$$

Thus the Vandermonde determinant equals the determinant of this matrix, which is the product of its diagonal entries.

This proves the desired equality. Moreover, one gets the LU decomposition of $V$ as $V=LU^{-1}$.
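This proof can be checked numerically (a NumPy sketch with arbitrary nodes): building the change-of-basis matrix U column by column from the Newton polynomials makes VU lower triangular, with det(V) equal to the product of its diagonal.

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0])
n = len(x)
V = np.vander(x, increasing=True)

# U: column k = monomial coefficients (low to high) of (t-x_0)...(t-x_{k-1})
U = np.zeros((n, n))
for k in range(n):
    c = np.array([1.0])
    for i in range(k):
        c = np.convolve(c, np.array([-x[i], 1.0]))  # multiply by (t - x_i)
    U[:k + 1, k] = c

L = V @ U   # lower triangular: entry (i, k) is the k-th Newton polynomial at x_i
```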

Third proof: row and column operations


The third proof is based on the fact that adding a scalar multiple of one column of a matrix to another column leaves the determinant unchanged.

So, by subtracting from each column (except the first) the preceding column multiplied by $x_0$, the determinant is not changed. (These subtractions must be performed starting from the last column, so that one always subtracts a column that has not yet been changed.) This gives the matrix

$$V=\begin{pmatrix}1&0&0&0&\cdots&0\\1&x_1-x_0&x_1(x_1-x_0)&x_1^2(x_1-x_0)&\cdots&x_1^{n-1}(x_1-x_0)\\1&x_2-x_0&x_2(x_2-x_0)&x_2^2(x_2-x_0)&\cdots&x_2^{n-1}(x_2-x_0)\\\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\1&x_n-x_0&x_n(x_n-x_0)&x_n^2(x_n-x_0)&\cdots&x_n^{n-1}(x_n-x_0)\end{pmatrix}$$

Applying the Laplace expansion formula along the first row, we obtain $\det(V)=\det(B)$, with

$$B=\begin{pmatrix}x_1-x_0&x_1(x_1-x_0)&x_1^2(x_1-x_0)&\cdots&x_1^{n-1}(x_1-x_0)\\x_2-x_0&x_2(x_2-x_0)&x_2^2(x_2-x_0)&\cdots&x_2^{n-1}(x_2-x_0)\\\vdots&\vdots&\vdots&\ddots&\vdots\\x_n-x_0&x_n(x_n-x_0)&x_n^2(x_n-x_0)&\cdots&x_n^{n-1}(x_n-x_0)\end{pmatrix}$$

As all the entries in the $i$-th row of $B$ have a factor of $x_{i+1}-x_0$, one can take these factors out and obtain

$$\det(V)=(x_1-x_0)(x_2-x_0)\cdots(x_n-x_0)\begin{vmatrix}1&x_1&x_1^2&\cdots&x_1^{n-1}\\1&x_2&x_2^2&\cdots&x_2^{n-1}\\\vdots&\vdots&\vdots&\ddots&\vdots\\1&x_n&x_n^2&\cdots&x_n^{n-1}\end{vmatrix}=\prod_{1\leq i\leq n}(x_i-x_0)\det(V'),$$

where $V'$ is a Vandermonde matrix in $x_1,\ldots,x_n$. Iterating this process on this smaller Vandermonde matrix, one eventually gets the desired expression of $\det(V)$ as the product of all $x_j-x_i$ such that $i<j$.
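The recursion in this proof translates directly into code (a plain-Python sketch; the function name is ours):

```python
def vander_det(x):
    """Vandermonde determinant on nodes x, via the row/column-operation
    recursion: pull out the factors (x_i - x_0), then recurse on x[1:]."""
    if len(x) <= 1:
        return 1.0
    factor = 1.0
    for xi in x[1:]:
        factor *= xi - x[0]          # the (x_i - x_0) factors of B's rows
    return factor * vander_det(x[1:])

# matches prod_{i<j} (x_j - x_i); e.g. for nodes 1, 2, 4, 7 it gives 540
print(vander_det([1.0, 2.0, 4.0, 7.0]))
```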

Rank of the Vandermonde matrix

  • An m × n rectangular Vandermonde matrix such that m ≤ n has rank m if and only if all xi are distinct.
  • An m × n rectangular Vandermonde matrix such that m ≥ n has rank n if and only if there are n of the xi that are distinct.
  • A square Vandermonde matrix is invertible if and only if the xi are distinct. An explicit formula for the inverse is known (see below).[9][3]
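These rank statements are easy to confirm numerically (a NumPy sketch; the node choices are arbitrary):

```python
import numpy as np

# m >= n with a repeated node, but n = 3 distinct x_i: rank is still 3
x = np.array([0.0, 1.0, 2.0, 2.0, 5.0])
V = np.vander(x, 3, increasing=True)          # 5 x 3
rank_tall = np.linalg.matrix_rank(V)

# m <= n with all m nodes distinct: rank is m
x = np.array([0.0, 1.0, 2.0])
W = np.vander(x, 5, increasing=True)          # 3 x 5
rank_wide = np.linalg.matrix_rank(W)
```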

Generalizations


If the columns of the Vandermonde matrix, instead of $1,x,x^2,\ldots$, are general polynomials $p_0,p_1,\ldots,p_n$ of respective degrees $0,1,\ldots,n$, that is, if $V=[p_i(x_j)]_{i,j\in 0:n}$, then

$$\det V(x_{0:n})=\left(\prod_k c_k\right)\Delta(x)$$

where $c_0,\ldots,c_n$ are the leading coefficients of $p_0,p_1,\ldots,p_n$, and $\Delta(x)=\prod_{0\leq i<j\leq n}(x_j-x_i)$ is the Vandermonde determinant.

Proof

$\det V(x_{0:n})$ is zero whenever $x_j=x_k$, and has degree $\tfrac{1}{2}n(n+1)$, so it is a multiple of $\prod_{i<j}(x_j-x_i)$. To find the constant in front, simply calculate the coefficient of the term $x_0^0\cdots x_n^n$, which is $c_0\cdots c_n$.

By multiplying with the Hermitian conjugate, we find that

$$\det\left[\sum_l p_j(z_l)p_k(z_l)^*\right]=\left(\prod_k|c_k|^2\right)|\Delta(z)|^2=\det\left[\sum_l p_l(z_j)p_l(z_k)^*\right]$$
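A small numerical check of the generalized determinant formula (a NumPy sketch; the polynomials and nodes are made-up examples):

```python
import numpy as np
from itertools import combinations

x = np.array([0.0, 1.0, 2.0, 4.0])
# p_k has degree k; coefficients listed low to high; leading coefficients c_k
polys = [np.array([3.0]),                    # p_0 = 3
         np.array([1.0, 2.0]),               # p_1 = 1 + 2t
         np.array([0.0, 1.0, -1.0]),         # p_2 = t - t^2
         np.array([5.0, 0.0, 0.0, 4.0])]     # p_3 = 5 + 4t^3
c = np.array([3.0, 2.0, -1.0, 4.0])

# V = [p_i(x_j)]; np.polyval wants coefficients high to low, hence p[::-1]
V = np.array([[np.polyval(p[::-1], xj) for xj in x] for p in polys])
delta = np.prod([x[j] - x[i] for i, j in combinations(range(4), 2)])
# det V = (prod c_k) * Delta(x)
```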

Theorem (Tao 2012, page 251[10]). Fix $x$; then, in the limit $y\to 0$,

$$\det[e^{x_iy_j}]=\frac{1}{1!\cdots n!}\,\Delta(x)\Delta(y)+o(\Delta(y))$$

uniformly in $y$.

Proof

If any $x_j=x_k$ or $y_j=y_k$, then the determinant is zero, so it has the form $\det[e^{x_iy_j}]=C(x,y)\Delta(x)\Delta(y)$, where $C(x,y)$ is some power series in $x,y$.

The left side is a sum of the form $\sum_\sigma(-1)^{|\sigma|}e^{\sum_i x_iy_{\sigma(i)}}$. Expanding each exponential as a Taylor series gives, for fixed $x$, a series that is uniformly convergent in $y$ in a neighborhood of zero.

To find the constant term of $C(x,y)$, simply calculate the coefficient of the term $x_0^0y_0^0\cdots x_n^ny_n^n$, which is $\frac{1}{1!\cdots n!}$.

By the symmetry of the determinant, the next lowest-powered term of $C(x,y)$ is of the form $a(x_0y_0+\cdots+x_ny_n)$, which is $o(1)$ as $y\to 0$.
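The limit can be probed numerically (a NumPy sketch with n = 2 and a small scale on y; the tolerance is loose because of the o(Δ(y)) correction term):

```python
import numpy as np
from math import factorial
from itertools import combinations

def delta(v):
    """Vandermonde product prod_{i<j} (v_j - v_i)."""
    return np.prod([v[j] - v[i] for i, j in combinations(range(len(v)), 2)])

x = np.array([0.5, 1.0, 2.0])
y = 1e-3 * np.array([0.0, 1.0, 3.0])      # a small y, approaching 0

lhs = np.linalg.det(np.exp(np.outer(x, y)))
rhs = delta(x) * delta(y) / (factorial(1) * factorial(2))
# lhs / rhs -> 1 as the scale of y goes to 0
```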

Inverse Vandermonde matrix


As explained above in Applications, the polynomial interpolation problem for $p(x)=a_0+a_1x+a_2x^2+\dots+a_nx^n$ satisfying $p(x_0)=y_0,\ldots,p(x_n)=y_n$ is equivalent to the matrix equation $Va=y$, which has the unique solution $a=V^{-1}y$. There are other known formulas which solve the interpolation problem; since they must be equivalent to the unique $a=V^{-1}y$, they give explicit formulas for the inverse matrix $V^{-1}$. In particular, Lagrange interpolation shows that the columns of the inverse matrix

$$V^{-1}=\begin{pmatrix}1&x_0&\dots&x_0^n\\\vdots&\vdots&&\vdots\\1&x_n&\dots&x_n^n\end{pmatrix}^{-1}=L=\begin{pmatrix}L_{00}&\cdots&L_{0n}\\\vdots&&\vdots\\L_{n0}&\cdots&L_{nn}\end{pmatrix}$$

are the coefficients of the Lagrange polynomials

$$L_j(x)=L_{0j}+L_{1j}x+\cdots+L_{nj}x^n=\prod_{0\leq i\leq n,\ i\neq j}\frac{x-x_i}{x_j-x_i}=\frac{f(x)}{(x-x_j)\,f'(x_j)},$$

where $f(x)=(x-x_0)\cdots(x-x_n)$. This is easily demonstrated: the polynomials clearly satisfy $L_j(x_i)=0$ for $i\neq j$ while $L_j(x_j)=1$, so we may compute the product $VL=[L_j(x_i)]_{i,j=0}^n=I$, the identity matrix.
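This yields a direct recipe for the inverse (a NumPy sketch; the nodes are arbitrary): build the coefficient vector of each Lagrange basis polynomial and stack them as the columns of L.

```python
import numpy as np

x = np.array([0.0, 1.0, 3.0, 4.0])
n = len(x)
V = np.vander(x, increasing=True)

L = np.zeros((n, n))
for j in range(n):
    # coefficients (low to high) of prod_{i != j} (t - x_i)/(x_j - x_i)
    coeffs = np.array([1.0])
    for i in range(n):
        if i != j:
            coeffs = np.convolve(coeffs, np.array([-x[i], 1.0])) / (x[j] - x[i])
    L[:, j] = coeffs
```

Multiplying out, `V @ L` evaluates each Lagrange polynomial at each node, which is exactly the identity matrix.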

Confluent Vandermonde matrices

See also: Hermite interpolation

As described before, a Vandermonde matrix describes the linear-algebra interpolation problem of finding the coefficients of a polynomial $p(x)$ of degree $n-1$ based on the values $p(x_1),\ldots,p(x_n)$, where $x_1,\ldots,x_n$ are distinct points. If the $x_i$ are not distinct, then this problem does not have a unique solution (and the corresponding Vandermonde matrix is singular). However, if we specify the values of the derivatives at the repeated points, then the problem can have a unique solution. For example, the problem

$$\begin{cases}p(0)=y_1\\p'(0)=y_2\\p(1)=y_3\end{cases}$$

where $p(x)=ax^2+bx+c$, has a unique solution $(a,b,c)$ for every $y_1,y_2,y_3$. In general, suppose that $x_1,x_2,\ldots,x_n$ are (not necessarily distinct) numbers, and suppose for simplicity that equal values are adjacent:

$$x_1=\cdots=x_{m_1},\ x_{m_1+1}=\cdots=x_{m_2},\ \ldots,\ x_{m_{k-1}+1}=\cdots=x_{m_k}$$

where $m_1<m_2<\cdots<m_k=n$, and $x_{m_1},\ldots,x_{m_k}$ are distinct. Then the corresponding interpolation problem is

{\displaystyle {\begin{cases}p(x_{m_{1}})=y_{1},&p'(x_{m_{1}})=y_{2},&\ldots ,&p^{(m_{1}-1)}(x_{m_{1}})=y_{m_{1}},\\p(x_{m_{2}})=y_{m_{1}+1},&p'(x_{m_{2}})=y_{m_{1}+2},&\ldots ,&p^{(m_{2}-m_{1}-1)}(x_{m_{2}})=y_{m_{2}},\\\qquad \vdots &&&\qquad \vdots \\p(x_{m_{k}})=y_{m_{k-1}+1},&p'(x_{m_{k}})=y_{m_{k-1}+2},&\ldots ,&p^{(m_{k}-m_{k-1}-1)}(x_{m_{k}})=y_{m_{k}}.\end{cases}}}

The corresponding matrix for this problem is called a confluent Vandermonde matrix, given as follows.[11] If {\displaystyle 1\leq i,j\leq n}, then {\displaystyle m_{\ell }<i\leq m_{\ell +1}} for a unique {\displaystyle 0\leq \ell \leq k-1} (setting {\displaystyle m_{0}=0}). We let

{\displaystyle V_{i,j}={\begin{cases}0&{\text{if }}j<i-m_{\ell },\\[6pt]{\dfrac {(j-1)!}{(j-(i-m_{\ell }))!}}x_{i}^{j-(i-m_{\ell })}&{\text{if }}j\geq i-m_{\ell }.\end{cases}}}

This generalization of the Vandermonde matrix is non-singular, so the system of equations has a unique solution, and it possesses most of the other properties of the ordinary Vandermonde matrix. Its rows are derivatives (of some order) of the original Vandermonde rows. For a review of algorithms for inverting confluent Vandermonde matrices, see ch. 1.9, page 54, of the book [12].
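As an illustrative sketch (not part of the source; the helper name `confluent_vandermonde` is hypothetical), the entries above can be generated directly. In 0-based indices, the row for the d-th derivative at a node has entries {\displaystyle j!/(j-d)!\,x^{j-d}} for {\displaystyle j\geq d} and 0 otherwise; solving against this matrix recovers the earlier example {\displaystyle p(0)=y_{1},\ p'(0)=y_{2},\ p(1)=y_{3}}:

```python
import numpy as np
from math import factorial

def confluent_vandermonde(points):
    """Confluent Vandermonde matrix for `points`, a list of
    (node, multiplicity) pairs, with n = total multiplicity.
    Each node contributes rows for p, p', ..., p^(mult - 1)."""
    n = sum(mult for _, mult in points)
    rows = []
    for x, mult in points:
        for d in range(mult):                  # d = derivative order
            rows.append([0.0 if j < d
                         # d-th derivative of x^j is j!/(j-d)! * x^(j-d)
                         else factorial(j) // factorial(j - d) * x ** (j - d)
                         for j in range(n)])
    return np.array(rows)

# The earlier example: p(0) = y1, p'(0) = y2, p(1) = y3 for a quadratic.
V = confluent_vandermonde([(0.0, 2), (1.0, 1)])
c, b, a = np.linalg.solve(V, np.array([2.0, -1.0, 5.0]))
```

The coefficient vector is ordered by increasing power, matching the monomial columns {\displaystyle 1,x,\ldots ,x^{n-1}}.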

Another way to derive the above formula is by taking a limit of the Vandermonde matrix as the {\displaystyle x_{i}} approach each other. For example, to get the case of {\displaystyle x_{1}=x_{2}}, subtract the first row from the second in the original Vandermonde matrix, divide by {\displaystyle x_{2}-x_{1}}, and let {\displaystyle x_{2}\to x_{1}}: this yields the corresponding row in the confluent Vandermonde matrix. This exhibits the generalized interpolation problem with given values and derivatives as a limit of the original case with distinct points: prescribing {\displaystyle p(x_{i}),p'(x_{i})} is similar to prescribing {\displaystyle p(x_{i}),p(x_{i}+\varepsilon )} for small {\displaystyle \varepsilon }. Geometers have studied the problem of tracking confluent points along their tangent lines, known as compactification of configuration space.
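This limit can be checked numerically. The sketch below (illustrative, not from the source) compares the divided difference of two Vandermonde rows at {\displaystyle x_{1}} and {\displaystyle x_{1}+\varepsilon } with the derivative row {\displaystyle (0,1,2x_{1},3x_{1}^{2},\ldots )}:

```python
import numpy as np

x1, n = 2.0, 4                                   # node and number of columns
row = lambda x: np.array([x ** j for j in range(n)])

# Row of derivatives of the monomials 1, x, x^2, x^3 at x1.
deriv_row = np.array([j * x1 ** (j - 1) if j > 0 else 0.0 for j in range(n)])

for eps in (1e-2, 1e-4, 1e-6):
    # The raw row difference tends to zero; dividing by eps = x2 - x1
    # recovers the derivative row in the limit.
    diff = (row(x1 + eps) - row(x1)) / eps
    print(eps, np.max(np.abs(diff - deriv_row)))
```

The printed error shrinks proportionally to {\displaystyle \varepsilon }, as expected from a first-order divided difference.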

See also

  • Companion matrix § Diagonalizability
  • Schur polynomial – a generalization
  • Alternant matrix
  • Lagrange polynomial
  • Wronskian
  • List of matrices
  • Moore determinant over a finite field
  • Vieta's formulas

References

  1. ^ Roger A. Horn and Charles R. Johnson (1991), Topics in matrix analysis, Cambridge University Press. See Section 6.1.
  2. ^ a b Golub, Gene H.; Van Loan, Charles F. (2013). Matrix Computations (4th ed.). The Johns Hopkins University Press. pp. 203–207. ISBN 978-1-4214-0859-0.
  3. ^ a b Macon, N.; A. Spitzbart (February 1958). "Inverses of Vandermonde Matrices". The American Mathematical Monthly. 65 (2): 95–100. doi:10.2307/2308881. JSTOR 2308881.
  4. ^ Björck, Åke; Pereyra, Victor Daniel (October 1970). "Solution of Vandermonde Systems of Equations" (PDF). American Mathematical Society. 24 (112): 893–903. doi:10.1090/S0025-5718-1970-0290541-1. S2CID 122006253.
  5. ^ Inverse of Vandermonde Matrix (2018), https://proofwiki.org/wiki/Inverse_of_Vandermonde_Matrix
  6. ^ Bostan, A.; Jeannerod, C.-P.; Schost, É. (2008). "Solving structured linear systems with large displacement rank". Theoretical Computer Science. 407 (1–3): 155–181. doi:10.1016/j.tcs.2008.05.014.
  7. ^ Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Lecture 4 reviews the representation theory of symmetric groups, including the role of the Vandermonde determinant.
  8. ^ Gauthier, J. "Fast Multipoint Evaluation On n Arbitrary Points." Simon Fraser University, Tech. Rep (2017).
  9. ^ Turner, L. Richard (August 1966). Inverse of the Vandermonde matrix with applications (PDF).
  10. ^ Tao, Terence (2012). Topics in random matrix theory. Graduate studies in mathematics. Providence, R.I: American Mathematical Society. ISBN 978-0-8218-7430-1.
  11. ^ Kalman, D. (1984). "The Generalized Vandermonde Matrix". Mathematics Magazine. 57 (1): 15–21. doi:10.1080/0025570X.1984.11977069.
  12. ^ Meurant, G. (2025). Hessenberg and Tridiagonal Matrices. SIAM.

Further reading

  • Ycart, Bernard (2013), "A case of mathematical eponymy: the Vandermonde determinant", Revue d'Histoire des Mathématiques, 13, arXiv:1204.4716, Bibcode:2012arXiv1204.4716Y.