Similar Matrices & Diagonalization: The Power Technique
Introduction
When JEE asks you to find $A^{100}$ or evaluate complex matrix expressions, diagonalization is often the most elegant approach. Similar matrices share fundamental properties, and diagonalizable matrices make power calculations trivial.
Part I: Similar Matrices
1.1 Definition
Two matrices $A$ and $B$ are similar if there exists an invertible matrix $P$ such that:

$$B = P^{-1}AP$$

We write $A \sim B$.
1.2 The Fundamental Theorem
If $A \sim B$, they share ALL of the following properties:

| Shared Property | Why It's Shared |
|---|---|
| Determinant | $\det(P^{-1}AP) = \det(P^{-1})\det(A)\det(P) = \det(A)$ |
| Trace | $\text{tr}(P^{-1}AP) = \text{tr}(APP^{-1}) = \text{tr}(A)$ |
| Eigenvalues | Same characteristic polynomial |
| Characteristic polynomial | $\det(P^{-1}AP - \lambda I) = \det(P^{-1}(A-\lambda I)P) = \det(A - \lambda I)$ |
| Rank | Multiplication by invertible $P$ preserves rank |
| Nullity | Rank (and size) are preserved |
| Algebraic multiplicities | Same characteristic polynomial |
| Minimal polynomial | $p(P^{-1}AP) = P^{-1}p(A)P$ for any polynomial $p$ |
1.3 What's NOT Necessarily the Same
| Property | Remark |
|---|---|
| Eigenvectors | They transform: if $Av = \lambda v$, then $B(P^{-1}v) = \lambda(P^{-1}v)$ |
| Matrix entries | Generally different |
| Symmetry | $A$ symmetric does not force $B$ to be symmetric |
1.4 Similarity is an Equivalence Relation
- Reflexive: $A \sim A$ (take $P = I$)
- Symmetric: $A \sim B \implies B \sim A$ (take $P^{-1}$)
- Transitive: $A \sim B$ and $B \sim C \implies A \sim C$ (compose the two transforming matrices)
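The invariance of trace and determinant under $B = P^{-1}AP$ is easy to check numerically. A minimal sketch in plain Python (the helper functions and the example matrices $A$, $P$ are illustrative choices, not from the text; `fractions` keeps the arithmetic exact):

```python
from fractions import Fraction as F

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[F(P[1][1], det), F(-P[0][1], det)],
            [F(-P[1][0], det), F(P[0][0], det)]]

A = [[4, 1], [2, 3]]               # tr(A) = 7, |A| = 10
P = [[1, 2], [3, 5]]               # invertible: det = -1
B = matmul(matmul(inv2(P), A), P)  # B = P^{-1} A P

trace = B[0][0] + B[1][1]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
print(trace, det)  # -> 7 10, matching tr(A) and |A|
```

The entries of $B$ look nothing like those of $A$, yet trace and determinant survive the transformation exactly.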
Part II: Diagonalization
2.1 Definition
A matrix $A$ is diagonalizable if it is similar to a diagonal matrix:

$$A = PDP^{-1}$$

where $D$ is diagonal.
2.2 Structure of P P P and D D D
If $A$ has eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ with corresponding eigenvectors $v_1, v_2, \ldots, v_n$:

$$D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}, \qquad P = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{pmatrix}$$
2.3 When is a Matrix Diagonalizable?
| Condition | Diagonalizable? |
|---|---|
| $n$ distinct eigenvalues | ✅ Always |
| Symmetric matrix (real) | ✅ Always (orthogonally) |
| Repeated eigenvalues with enough independent eigenvectors | ✅ Yes |
| Geometric multiplicity < algebraic multiplicity | ❌ No |
| Nilpotent (except $O$) | ❌ No |
| Defective matrix | ❌ No |
Key Theorem: $A$ is diagonalizable $\iff$ $A$ has $n$ linearly independent eigenvectors.
2.4 The Power Formula (Most Important!)
$$\boxed{A^n = PD^nP^{-1}}$$

Since $D$ is diagonal:

$$D^n = \begin{pmatrix} \lambda_1^n & 0 & \cdots & 0 \\ 0 & \lambda_2^n & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^n \end{pmatrix}$$

This converts matrix exponentiation into scalar exponentiation!
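The power formula can be sanity-checked by comparing $PD^nP^{-1}$ against brute-force repeated multiplication. A sketch (the example matrix $\begin{pmatrix}2&1\\1&2\end{pmatrix}$ with eigenvalues 3, 1 and eigenvectors $(1,1)^T$, $(1,-1)^T$ is an illustrative choice; $P^{-1}$ is precomputed by hand):

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 2]]                    # eigenvalues 3 and 1
P = [[1, 1], [1, -1]]                   # eigenvector columns
Pinv = [[F(1, 2), F(1, 2)], [F(1, 2), F(-1, 2)]]

n = 7
Dn = [[3**n, 0], [0, 1**n]]             # only scalar powers needed
via_diag = matmul(matmul(P, Dn), Pinv)  # A^n = P D^n P^{-1}

brute = [[1, 0], [0, 1]]
for _ in range(n):                      # n matrix multiplications
    brute = matmul(brute, A)

print(via_diag == brute)  # -> True
```

The diagonalization route touches the matrix structure only twice, however large $n$ gets.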
2.5 Other Functions via Diagonalization
For any function $f$ defined on the eigenvalues:

$$f(A) = Pf(D)P^{-1} = P\begin{pmatrix} f(\lambda_1) & & \\ & f(\lambda_2) & \\ & & \ddots \end{pmatrix}P^{-1}$$

This works for:
- $A^n$ (polynomial)
- $e^A$ (matrix exponential)
- $\sqrt{A}$ (if the eigenvalues are non-negative)
- $A^{-1}$ (if no eigenvalue is zero)
Part III: Step-by-Step Diagonalization
3.1 Complete Algorithm
1. Write the characteristic equation $|A - \lambda I| = 0$.
2. Solve it for the eigenvalues $\lambda_1, \ldots, \lambda_n$.
3. For each eigenvalue, solve $(A - \lambda I)v = 0$ for an eigenvector.
4. Form $P$ with the eigenvectors as columns.
5. Form $D$ with the eigenvalues on the diagonal, in the same order.
6. Compute $P^{-1}$; then $A = PDP^{-1}$ and $A^n = PD^nP^{-1}$.
3.2 Worked Example
Find $A^{10}$ where $A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}$.

Steps 1–2: Eigenvalues

$$|A - \lambda I| = \begin{vmatrix} 4-\lambda & 1 \\ 2 & 3-\lambda \end{vmatrix} = (4-\lambda)(3-\lambda) - 2 = \lambda^2 - 7\lambda + 10 = 0$$

$$(\lambda - 5)(\lambda - 2) = 0 \implies \lambda_1 = 5, \quad \lambda_2 = 2$$

Step 3: Eigenvectors

For $\lambda_1 = 5$:

$$(A - 5I)v = 0 \implies \begin{pmatrix} -1 & 1 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = 0$$

$$-x + y = 0 \implies v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$

For $\lambda_2 = 2$:

$$(A - 2I)v = 0 \implies \begin{pmatrix} 2 & 1 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = 0$$

$$2x + y = 0 \implies v_2 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$$

Steps 4–5: Form $P$ and $D$

$$P = \begin{pmatrix} 1 & 1 \\ 1 & -2 \end{pmatrix}, \quad D = \begin{pmatrix} 5 & 0 \\ 0 & 2 \end{pmatrix}$$

Step 6: Find $P^{-1}$

$$P^{-1} = \frac{1}{-3}\begin{pmatrix} -2 & -1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} \frac{2}{3} & \frac{1}{3} \\ \frac{1}{3} & -\frac{1}{3} \end{pmatrix}$$

Calculate $A^{10}$:

$$D^{10} = \begin{pmatrix} 5^{10} & 0 \\ 0 & 2^{10} \end{pmatrix} = \begin{pmatrix} 9765625 & 0 \\ 0 & 1024 \end{pmatrix}$$

$$A^{10} = PD^{10}P^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & -2 \end{pmatrix}\begin{pmatrix} 9765625 & 0 \\ 0 & 1024 \end{pmatrix}\begin{pmatrix} \frac{2}{3} & \frac{1}{3} \\ \frac{1}{3} & -\frac{1}{3} \end{pmatrix}$$

After multiplication:

$$A^{10} = \frac{1}{3}\begin{pmatrix} 2 \cdot 5^{10} + 2^{10} & 5^{10} - 2^{10} \\ 2(5^{10} - 2^{10}) & 5^{10} + 2^{11} \end{pmatrix}$$
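The boxed result can be verified against direct multiplication. A minimal check in Python (integer arithmetic throughout, so the comparison is exact; the helper `matmul` is an illustrative one-liner):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4, 1], [2, 3]]

power = [[1, 0], [0, 1]]
for _ in range(10):
    power = matmul(power, A)

# Closed form from the diagonalization: entries of 3 * A^10
expected = [[2 * 5**10 + 2**10, 5**10 - 2**10],
            [2 * (5**10 - 2**10), 5**10 + 2**11]]

ok = all(3 * power[i][j] == expected[i][j]
         for i in range(2) for j in range(2))
print(ok)  # -> True
```

Comparing $3A^{10}$ against the numerators avoids any fractions.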
Part IV: Special Cases & Shortcuts
4.1 2×2 Matrix Quick Formula
For $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with distinct eigenvalues $\lambda_1, \lambda_2$:

$$A^n = \frac{\lambda_1^n - \lambda_2^n}{\lambda_1 - \lambda_2}A - \frac{\lambda_1\lambda_2(\lambda_1^{n-1} - \lambda_2^{n-1})}{\lambda_1 - \lambda_2}I$$
Or, using Cayley-Hamilton:

$$A^n = \alpha_n A + \beta_n I$$

where $\alpha_n$ and $\beta_n$ satisfy the recurrence induced by the characteristic equation: $\alpha_{n+1} = \text{tr}(A)\,\alpha_n - |A|\,\alpha_{n-1}$, and likewise for $\beta_n$.
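The quick formula can be tested against direct powers on the worked example $A = \begin{pmatrix}4&1\\2&3\end{pmatrix}$ ($\lambda_1 = 5$, $\lambda_2 = 2$), where the coefficients conveniently come out as integers. A sketch:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4, 1], [2, 3]]
l1, l2 = 5, 2          # eigenvalues of A
n = 6

# Coefficients from the 2x2 quick formula (exact integer division here)
alpha = (l1**n - l2**n) // (l1 - l2)
beta = -l1 * l2 * (l1**(n - 1) - l2**(n - 1)) // (l1 - l2)

shortcut = [[alpha * A[i][j] + beta * (i == j) for j in range(2)]
            for i in range(2)]

direct = [[1, 0], [0, 1]]
for _ in range(n):
    direct = matmul(direct, A)

print(shortcut == direct)  # -> True
```

Here $\alpha_6 A + \beta_6 I$ reproduces $A^6$ without a single matrix multiplication.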
4.2 When Eigenvalues are 0 and Non-zero
If $\lambda_1 = k \neq 0$ and $\lambda_2 = 0$:

$$A^n = \frac{k^n}{k}A = k^{n-1}A$$
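A quick check of this shortcut, using the illustrative singular matrix $A = \begin{pmatrix}2&2\\1&1\end{pmatrix}$ (my own example: $|A| = 0$ and $\text{tr}(A) = 3$, so the eigenvalues are $k = 3$ and $0$):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 2], [1, 1]]   # det = 0, trace = 3  ->  eigenvalues 3 and 0
k, n = 3, 5

direct = [[1, 0], [0, 1]]
for _ in range(n):
    direct = matmul(direct, A)

shortcut = [[k**(n - 1) * A[i][j] for j in range(2)] for i in range(2)]
print(direct == shortcut)  # -> True
```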
4.3 Symmetric Matrices (Orthogonal Diagonalization)
For symmetric $A$:
- All eigenvalues are real
- Eigenvectors for distinct eigenvalues are orthogonal
- $P$ can be chosen orthogonal: $P^{-1} = P^T$

$$A = PDP^T$$
4.4 Idempotent Matrix Eigenvalues
If $A^2 = A$, the eigenvalues satisfy $\lambda^2 = \lambda$, so $\lambda \in \{0, 1\}$.
4.5 Involutory Matrix Eigenvalues
If $A^2 = I$, the eigenvalues satisfy $\lambda^2 = 1$, so $\lambda \in \{-1, 1\}$.
Part V: Non-Diagonalizable Matrices
5.1 When Diagonalization Fails
$$A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$$

Eigenvalue: $\lambda = 2$ (repeated)

Eigenvector equation: $(A - 2I)v = 0 \implies \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}v = 0$

Only one independent eigenvector: $v = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$

Not enough eigenvectors → not diagonalizable!
5.2 Jordan Form (Alternative)
Non-diagonalizable matrices can be written in Jordan form:

$$A = PJP^{-1}$$

where $J$ has the eigenvalues on the diagonal and some 1s on the superdiagonal.

For $A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$, $A$ is already in Jordan form!

$$A^n = \begin{pmatrix} 2^n & n \cdot 2^{n-1} \\ 0 & 2^n \end{pmatrix}$$
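The Jordan-block power formula above can be confirmed by direct multiplication. A minimal sketch:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [0, 2]]   # Jordan block for eigenvalue 2

n = 10
direct = [[1, 0], [0, 1]]
for _ in range(n):
    direct = matmul(direct, A)

formula = [[2**n, n * 2**(n - 1)], [0, 2**n]]
print(direct == formula)  # -> True
```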
5.3 Nilpotent + Scalar Pattern
If $A = \lambda I + N$ where $N$ is nilpotent, the binomial theorem gives:

$$A^n = \sum_{k=0}^{m-1} \binom{n}{k}\lambda^{n-k}N^k$$

where $m$ is the nilpotency index of $N$ (the smallest $m$ with $N^m = O$).
Part VI: Similarity Transformations in Action
6.1 Simplifying Calculations
If you need to compute $f(A)$ and $A = PBP^{-1}$:

$$f(A) = Pf(B)P^{-1}$$

Choose $B$ to be the simplest similar matrix (diagonal if possible).
6.2 Trace and Determinant Shortcuts
Since similar matrices have the same trace and determinant:

$$\text{tr}(A) = \text{tr}(D) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$$

$$|A| = |D| = \lambda_1 \lambda_2 \cdots \lambda_n$$
6.3 Verifying Similarity
To check whether $A \sim B$:
1. Check that $|A| = |B|$ and $\text{tr}(A) = \text{tr}(B)$
2. Check that the characteristic polynomials are equal
3. If both pass (they are necessary but not sufficient), try to find an invertible $P$ such that $AP = PB$
Part VII: JEE Previous Year Questions
PYQ 1: JEE Advanced 2017
Problem: Let $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$. Find $A^{10} - 5A^9$.
Solution:
First, find the characteristic equation:

$$|A - \lambda I| = (1-\lambda)(4-\lambda) - 6 = \lambda^2 - 5\lambda - 2 = 0$$

By Cayley-Hamilton: $A^2 - 5A - 2I = O$, so

$$A^2 = 5A + 2I$$

Now:

$$A^{10} - 5A^9 = A^9(A - 5I)$$

From $A^2 - 5A = 2I$, factoring gives $A(A - 5I) = 2I$, so $A - 5I = 2A^{-1}$ (valid since $|A| = -2 \neq 0$).

So: $A^{10} - 5A^9 = A^9 \cdot 2A^{-1} = 2A^8$

One could keep reducing ($A^8 = (A^2)^4 = (5A + 2I)^4$), but the expressions grow quickly, so $2A^8$ is the natural closed form. Confirm via eigenvalues:

$$\lambda^2 - 5\lambda - 2 = 0 \implies \lambda = \frac{5 \pm \sqrt{33}}{2}$$

Let $\lambda_1 = \frac{5 + \sqrt{33}}{2}$, $\lambda_2 = \frac{5 - \sqrt{33}}{2}$. Note: $\lambda_1 + \lambda_2 = 5$, $\lambda_1\lambda_2 = -2$.

For each eigenvalue: $\lambda^{10} - 5\lambda^9 = \lambda^9(\lambda - 5)$. Since $\lambda^2 - 5\lambda = 2$, we have $\lambda(\lambda - 5) = 2$, i.e.

$$\lambda - 5 = \frac{2}{\lambda}$$

So $\lambda^9(\lambda - 5) = 2\lambda^8$ for both eigenvalues, confirming

$$\boxed{A^{10} - 5A^9 = 2A^8}$$
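The identity $A^{10} - 5A^9 = 2A^8$ is easy to confirm numerically, since Python integers are exact at any size:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(X, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = matmul(R, X)
    return R

A = [[1, 2], [3, 4]]
A8, A9, A10 = matpow(A, 8), matpow(A, 9), matpow(A, 10)

lhs = [[A10[i][j] - 5 * A9[i][j] for j in range(2)] for i in range(2)]
rhs = [[2 * A8[i][j] for j in range(2)] for i in range(2)]
print(lhs == rhs)  # -> True
```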
PYQ 2: JEE Main 2020
Problem: Let $A$ be a $2 \times 2$ matrix with eigenvalues 2 and 5. If $\text{tr}(A) = 7$, find $|A|$.

Solution:

For a $2 \times 2$ matrix:

$$\text{tr}(A) = \lambda_1 + \lambda_2 = 2 + 5 = 7 \checkmark$$

$$|A| = \lambda_1 \cdot \lambda_2 = 2 \times 5 = \boxed{10}$$
PYQ 3: JEE Main 2021
Problem: If $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, find $A^{50}$.

Solution:

Step 1: Eigenvalues

$$|A - \lambda I| = (2-\lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = (\lambda-3)(\lambda-1) = 0$$

$$\lambda_1 = 3, \quad \lambda_2 = 1$$

Step 2: Eigenvectors

For $\lambda = 3$: $(A - 3I)v = 0 \implies \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}v = 0 \implies v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$

For $\lambda = 1$: $(A - I)v = 0 \implies \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}v = 0 \implies v_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$

Step 3: Form matrices

$$P = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad D = \begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}$$

$$P^{-1} = \frac{1}{-2}\begin{pmatrix} -1 & -1 \\ -1 & 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$

Step 4: Calculate $A^{50}$

$$D^{50} = \begin{pmatrix} 3^{50} & 0 \\ 0 & 1 \end{pmatrix}$$

$$A^{50} = PD^{50}P^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 3^{50} & 0 \\ 0 & 1 \end{pmatrix} \cdot \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$

$$= \frac{1}{2}\begin{pmatrix} 3^{50} & 1 \\ 3^{50} & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 3^{50}+1 & 3^{50}-1 \\ 3^{50}-1 & 3^{50}+1 \end{pmatrix}$$

$$A^{50} = \boxed{\frac{1}{2}\begin{pmatrix} 3^{50}+1 & 3^{50}-1 \\ 3^{50}-1 & 3^{50}+1 \end{pmatrix}}$$
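Verifying the closed form for $A^{50}$ (the entries $\frac{3^{50} \pm 1}{2}$ are exact integers since $3^{50}$ is odd):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 2]]
direct = [[1, 0], [0, 1]]
for _ in range(50):
    direct = matmul(direct, A)

p, m = (3**50 + 1) // 2, (3**50 - 1) // 2
print(direct == [[p, m], [m, p]])  # -> True
```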
PYQ 4: JEE Advanced 2019
Problem: Let $A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$. Find $A^{99}$.

Solution:

Observation: this is a cyclic permutation matrix!

Check powers:

$$A^2 = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \quad A^3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = I$$

So $A$ has period 3!

$$99 = 3 \times 33 \implies A^{99} = (A^3)^{33} = I^{33} = \boxed{I}$$

Alternative via eigenvalues: the eigenvalues are the cube roots of unity $1, \omega, \omega^2$, and $\lambda^{99} = (\lambda^3)^{33} = 1$ for all three, confirming $A^{99} = I$.
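The period-3 behaviour can be confirmed with a 3×3 version of the same brute-force check:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(X, n):
    R = [[int(i == j) for j in range(len(X))] for i in range(len(X))]
    for _ in range(n):
        R = matmul(R, X)
    return R

A = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]  # cyclic permutation matrix
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

print(matpow(A, 3) == I, matpow(A, 99) == I)  # -> True True
```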
PYQ 5: JEE Main 2022
Problem: If $A$ and $B$ are similar matrices and $|A| = 5$, $\text{tr}(B) = 10$, then:

(A) $|B| = 5$
(B) $\text{tr}(A) = 10$
(C) Both (A) and (B)
(D) Neither

Solution:

Similar matrices have:
- The same determinant: $|A| = |B| = 5$ ✓
- The same trace: $\text{tr}(A) = \text{tr}(B) = 10$ ✓

Answer: $\boxed{\text{(C) Both (A) and (B)}}$
PYQ 6: JEE Main 2019
Problem: If $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, find $A^n$.

Solution:

This matrix is NOT diagonalizable (repeated eigenvalue $\lambda = 1$, only one independent eigenvector).

Write $A = I + N$ where $N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$.

$N^2 = O$, so $N$ is nilpotent with index 2. By the binomial theorem:

$$A^n = (I + N)^n = I + nN = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + n\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \boxed{\begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}}$$
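A quick check of $A^n = \begin{pmatrix}1&n\\0&1\end{pmatrix}$ for the first few powers:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]
R = [[1, 0], [0, 1]]
for n in range(1, 8):
    R = matmul(R, A)
    assert R == [[1, n], [0, 1]]   # A^n has n in the top-right corner
print(R)  # -> [[1, 7], [0, 1]]
```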
PYQ 7: JEE Advanced 2015
Problem: Let $A = \begin{pmatrix} 5 & -3 \\ 6 & -4 \end{pmatrix}$. Find $A^{100}$.

Solution:

Eigenvalues:

$$|A - \lambda I| = (5-\lambda)(-4-\lambda) + 18 = \lambda^2 - \lambda - 2 = (\lambda-2)(\lambda+1) = 0$$

$$\lambda_1 = 2, \quad \lambda_2 = -1$$

Eigenvectors:

For $\lambda = 2$: $\begin{pmatrix} 3 & -3 \\ 6 & -6 \end{pmatrix}v = 0 \implies v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$

For $\lambda = -1$: $\begin{pmatrix} 6 & -3 \\ 6 & -3 \end{pmatrix}v = 0 \implies v_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$

Matrices:

$$P = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}, \quad P^{-1} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}, \quad D = \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}$$

Calculate:

$$D^{100} = \begin{pmatrix} 2^{100} & 0 \\ 0 & 1 \end{pmatrix}$$

$$A^{100} = PD^{100}P^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 2^{100} & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}$$

$$= \begin{pmatrix} 2^{100} & 1 \\ 2^{100} & 2 \end{pmatrix}\begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 2^{101} - 1 & -2^{100} + 1 \\ 2^{101} - 2 & -2^{100} + 2 \end{pmatrix}$$

$$A^{100} = \boxed{\begin{pmatrix} 2^{101} - 1 & 1 - 2^{100} \\ 2^{101} - 2 & 2 - 2^{100} \end{pmatrix}}$$
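And the $A^{100}$ result can be verified the same way, with exact integers:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[5, -3], [6, -4]]
direct = [[1, 0], [0, 1]]
for _ in range(100):
    direct = matmul(direct, A)

expected = [[2**101 - 1, 1 - 2**100],
            [2**101 - 2, 2 - 2**100]]
print(direct == expected)  # -> True
```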
Part VIII: Quick Reference
Diagonalization Checklist
- Characteristic equation $|A - \lambda I| = 0$ solved for all eigenvalues
- One independent eigenvector found per eigenvalue (counting multiplicity)
- Columns of $P$ = eigenvectors; diagonal of $D$ = eigenvalues, in matching order
- $P^{-1}$ computed; verify $AP = PD$ before using $A^n = PD^nP^{-1}$
Properties Preserved Under Similarity
| Preserved | Not Preserved |
|---|---|
| Determinant | Individual entries |
| Trace | Symmetry |
| Eigenvalues | Positive definiteness |
| Characteristic polynomial | Sparsity pattern |
| Rank, nullity | Orthogonality |
| Minimal polynomial | |
Common Patterns
| Matrix Type | Eigenvalues | Diagonalizable? |
|---|---|---|
| Symmetric (real) | All real | ✅ Yes (orthogonally) |
| Idempotent | 0, 1 only | ✅ Yes |
| Involutory | ±1 only | ✅ Yes |
| Nilpotent | All 0 | ❌ No (unless $O$) |
| Rotation | $e^{\pm i\theta}$ | ✅ Yes (over $\mathbb{C}$) |
| Upper triangular | Diagonal entries | Maybe |
Conclusion
Similar matrices and diagonalization are powerful tools because:
- Powers become trivial: $A^n = PD^nP^{-1}$ reduces matrix powers to scalar powers
- Properties transfer: trace, determinant, and eigenvalues are similarity invariants
- Functions extend: functions of a diagonalizable matrix can be computed via the eigenvalues
- Recognition saves time: spot special matrices (symmetric, idempotent, etc.)
JEE Strategy:
- Check whether the matrix is diagonalizable (distinct eigenvalues ⟹ always yes)
- For 2×2 matrices, diagonalization is usually faster than Cayley-Hamilton for large powers
- For non-diagonalizable matrices, use $A = \lambda I + N$ with nilpotent $N$
Last updated: January 2026 | Essential for JEE Main & Advanced