
Physics 501
Math Models of Physics Problems
Luis Anchordoqui


Bibliography

L. A. Anchordoqui and T. C. Paul,
"Mathematical Models of Physics Problems"
(Nova Publishers, 2013)

G. F. D. Duff and D. Naylor,
"Differential Equations of Applied Mathematics"
(John Wiley & Sons, 1966)

G. B. Arfken, H. J. Weber, and F. E. Harris,
"Mathematical Methods for Physicists" (7th Edition)
(Academic Press, 2012)


Provisional Course Outline
(Please note this may be revised during the course to match coverage of material during lectures, etc.)

1st week - Elements of Linear Algebra
2nd week - Analytic Functions
3rd week - Integration in the Complex Plane
4th week - Isolated Singularities and Residues
5th week - Initial Value Problem (Picard's Theorem)
6th week - Initial Value Problem (Green Matrix)
7th week - Boundary Value Problem (Sturm-Liouville Operator)
8th week - Midterm exam (October 23)
9th week - Boundary Value Problem (Special Functions)
10th week - Fourier Series and Fourier Transform
11th week - Hyperbolic Partial Differential Equation (Wave equation)
12th week - Parabolic Partial Differential Equation (Diffusion equation)
13th week - Elliptic Partial Differential Equation (Laplace equation)
14th week - Midterm exam (December 11)


Elements of Linear Algebra

1.1 Linear Spaces
1.2 Matrices and Linear Transformations


Linear Spaces

Definition 1.1.1.
A field $F$ is a set together with two operations $+$ and $\cdot$ for which all axioms below hold $\forall\, \lambda, \mu, \nu \in F$:

(i) closure ☛ the sum $\lambda + \mu$ and the product $\lambda \cdot \mu$ again belong to $F$
(ii) associative law ☛ $\lambda + (\mu + \nu) = (\lambda + \mu) + \nu$ & $\lambda \cdot (\mu \cdot \nu) = (\lambda \cdot \mu) \cdot \nu$
(iii) commutative law ☛ $\lambda + \nu = \nu + \lambda$ & $\lambda \cdot \mu = \mu \cdot \lambda$
(iv) distributive laws ☛ $\lambda \cdot (\mu + \nu) = \lambda \cdot \mu + \lambda \cdot \nu$ and $(\lambda + \mu) \cdot \nu = \lambda \cdot \nu + \mu \cdot \nu$
(v) existence of an additive identity ☛ there exists an element $0 \in F$ for which $\lambda + 0 = \lambda$
(vi) existence of a multiplicative identity ☛ there exists an element $1 \in F$ with $1 \neq 0$ for which $1 \cdot \lambda = \lambda$
(vii) existence of additive inverse ☛ to every $\lambda \in F$ there corresponds an additive inverse $-\lambda$ such that $\lambda + (-\lambda) = 0$
(viii) existence of multiplicative inverse ☛ to every $\lambda \in F$ with $\lambda \neq 0$ there corresponds a multiplicative inverse $\lambda^{-1}$ such that $\lambda^{-1} \cdot \lambda = 1$


Example 1.1.1.
Underlying every linear space is a field $F$ ☛ examples are $\mathbb{R}$ and $\mathbb{C}$

Definition 1.1.2.
A linear space $V$ is a collection of objects with a (vector) addition and scalar multiplication defined, which is closed under both operations.
Such a vector space satisfies the following axioms:

➢ commutative law of vector addition ☛ $x + y = y + x, \ \forall\, x, y \in V$
➢ associative law of vector addition ☛ $x + (y + w) = (x + y) + w, \ \forall\, x, y, w \in V$
➢ there exists a zero vector $0$ such that $x + 0 = x, \ \forall\, x \in V$


➢ to every element $x \in V$ there corresponds an inverse element $-x$ such that $x + (-x) = 0$
➢ associative law of scalar multiplication ☛ $(\lambda \mu)\, x = \lambda\, (\mu\, x), \ \forall\, x \in V$ and $\lambda, \mu \in F$
➢ distributive laws of scalar multiplication ☛ $(\lambda + \mu)\, x = \lambda\, x + \mu\, x, \ \forall\, x \in V$ and $\lambda, \mu \in F$
  and $\lambda\, (x + y) = \lambda\, x + \lambda\, y, \ \forall\, x, y \in V$ and $\lambda \in F$
➢ $1 \cdot x = x, \ \forall\, x \in V$


Example 1.1.2.
Cartesian space $\mathbb{R}^n$ is the prototypical example of a real $n$-dimensional vector space.
Let $x = (x_1, \ldots, x_n)$ be an ordered $n$-tuple of real numbers $x_i$,
to which there corresponds a point $x$ with these Cartesian coordinates and a vector $x$ with these components.
We define addition of vectors by component addition
$x + y = (x_1 + y_1, \ldots, x_n + y_n)$   (1.1.1.)
and scalar multiplication by component multiplication
$\lambda x = (\lambda x_1, \ldots, \lambda x_n)$   (1.1.2.)
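A minimal numerical sketch of (1.1.1.) and (1.1.2.), using NumPy arrays as the vectors of Example 1.1.2 (the particular values are illustrative, not from the lecture):

```python
import numpy as np

# Vectors of R^3 represented as NumPy arrays (illustrative values)
x = np.array([1.0, 2.0, 3.0])
y = np.array([-1.0, 0.5, 4.0])
lam, mu = 2.0, -3.0

# Component-wise addition (1.1.1.) and scalar multiplication (1.1.2.)
print(x + y)        # [ 0.   2.5  7. ]
print(lam * x)      # [ 2.  4.  6.]

# Spot-check two vector-space axioms
assert np.allclose(x + y, y + x)                      # commutativity
assert np.allclose((lam + mu) * x, lam * x + mu * x)  # distributivity
```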


Definition 1.1.3.
Given a vector space $V$ over a field $F$, a subset $W$ of $V$ is called a subspace
if $W$ is a vector space over $F$ under the operations already defined on $V$.

Corollary 1.1.1.
A subset $W$ of a vector space $V$ is a subspace of $V$ $\Leftrightarrow$
(i) $W$ is nonempty; (ii) if $x, y \in W$ ☛ then $x + y \in W$; (iii) if $x \in W$ and $\lambda \in F$ ☛ then $\lambda \cdot x \in W$

After defining the notions of vector spaces and subspaces,
the next step is to identify functions that can be used to relate one vector space to another.
These functions should respect the algebraic structure of vector spaces,
so we require that they preserve addition and scalar multiplication.


Definition 1.1.4.
Let $V$ and $W$ be vector spaces over a field $F$.
A linear transformation from $V$ to $W$ is a function $T : V \to W$ such that
$T(\lambda x + \mu y) = \lambda\, T(x) + \mu\, T(y)$   (1.1.3.)
for all vectors $x, y \in V$ and all scalars $\lambda, \mu \in F$.
If a linear transformation is one-to-one and onto,
it is called a vector space isomorphism ☛ or simply isomorphism.

Definition 1.1.5.
Let $S = \{x_1, \cdots, x_n\}$ be a set of vectors in a vector space $V$ over a field $F$.
Any vector of the form $y = \sum_{i=1}^{n} \lambda_i x_i$ for $\lambda_i \in F$ is called a linear combination of vectors in $S$.
The set $S$ is said to span $V$ if each element of $V$ can be expressed as a linear combination of vectors in $S$.
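As an illustration (not part of the slides), an $m \times n$ matrix acting by $T(x) = A x$ defines a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$; a quick numerical check of the linearity property (1.1.3.):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))          # T(x) = A x maps R^4 -> R^3
x, y = rng.standard_normal(4), rng.standard_normal(4)
lam, mu = 2.5, -1.3

# Linearity (1.1.3.): T(lam*x + mu*y) == lam*T(x) + mu*T(y)
lhs = A @ (lam * x + mu * y)
rhs = lam * (A @ x) + mu * (A @ y)
assert np.allclose(lhs, rhs)
```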


Definition 1.1.6.
Let there be given vectors $x_1, \ldots, x_m$ and an equal number of scalars $\lambda_1, \ldots, \lambda_m$.
We can form the linear combination or sum
$\lambda_1 x_1 + \cdots + \lambda_k x_k + \cdots + \lambda_m x_m$   (1.1.4.)
which is also an element of the vector space.
Suppose there exist values $\lambda_1, \ldots, \lambda_m$, not all zero, such that the above vector sum is the zero vector.
Then the vectors $x_1, \ldots, x_m$ are said to be linearly dependent.
Conversely, the vectors $x_1, \ldots, x_m$ are called linearly independent if
$\lambda_1 x_1 + \cdots + \lambda_k x_k + \cdots + \lambda_m x_m = 0$   (1.1.5.)
demands that the scalars $\lambda_k$ all be zero.
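A practical way to test (1.1.5.) numerically (an illustrative sketch, not from the lecture): stack the vectors as columns of a matrix and compare its rank with the number of vectors.

```python
import numpy as np

def linearly_independent(vectors):
    """Return True if the given 1-D arrays are linearly independent."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(linearly_independent([np.array([1., 0., 0.]),
                            np.array([0., 1., 0.])]))   # True
print(linearly_independent([np.array([1., 2., 3.]),
                            np.array([2., 4., 6.])]))   # False (second = 2 * first)
```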


Definition 1.1.7.
Dimension of $V$ ☛ maximal number of linearly independent vectors of $V$

Definition 1.1.8.
Let $V$ be an $n$-dimensional vector space and
$S = \{x_1, \ldots, x_n\} \subset V$   (1.1.6.)
a linearly independent spanning set for $V$ ➪ $S$ is called a basis of $V$

Definition 1.1.9.
Let $S$ be a nonempty subset of a vector space $V$
➪ $S$ is a basis for $V$ if and only if each vector in $V$ can be written uniquely as a linear combination of vectors in $S$
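A hedged numerical illustration of Definition 1.1.9 (not from the slides): with a basis stored as matrix columns, the unique coefficients of a given vector are obtained by solving a linear system.

```python
import numpy as np

# A (non-orthogonal) basis of R^3, stored as the columns of B (illustrative choice)
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
v = np.array([2., 3., 4.])

# Unique coefficients c with B @ c = v (uniqueness holds because the columns of B form a basis)
c = np.linalg.solve(B, v)
assert np.allclose(B @ c, v)
print(c)
```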


Definition 1.1.10.
An inner product $\langle\,,\,\rangle : V \times V \to F$ is a function that takes each ordered pair $(x, y)$ of elements of $V$ to a number $\langle x, y\rangle \in F$ and has the following properties:

➢ conjugate symmetry or Hermiticity ☛ $\langle x, y\rangle = (\langle y, x\rangle)^*$
➢ linearity in second argument ☛ $\langle x, y + w\rangle = \langle x, y\rangle + \langle x, w\rangle$ and $\langle x, \lambda y\rangle = \lambda\, \langle x, y\rangle$
➢ definiteness ☛ $\langle x, x\rangle = 0 \Leftrightarrow x = 0$

Corollary 1.1.2.
Conjugate symmetry and linearity in the second variable give
$\langle \lambda x, y\rangle = (\langle y, \lambda x\rangle)^* = \lambda^* (\langle y, x\rangle)^* = \lambda^* \langle x, y\rangle$
$\langle y + w, x\rangle = (\langle x, y + w\rangle)^* = (\langle x, y\rangle)^* + (\langle x, w\rangle)^* = \langle y, x\rangle + \langle w, x\rangle$

Remark 1.1.1.
In $\mathbb{R}$ the inner product is symmetric, whereas in $\mathbb{C}$ it is a sesquilinear form
(i.e. it is linear in one argument and conjugate-linear in the other)


Definition 1.1.11.
An inner product $\langle\,,\,\rangle$ is said to be positive definite $\Leftrightarrow$ for all non-zero $x$ in $V$, $\langle x, x\rangle > 0$.
A positive definite inner product is usually referred to as a genuine inner product.

Definition 1.1.12.
An inner product space is a vector space $V$ equipped with an inner product $\langle\,,\,\rangle : V \times V \to F$.

Definition 1.1.13.
A vector space $V$ over a field $F$ endowed with a positive definite inner product (a.k.a. scalar product) defines a Euclidean space $E$.

Example 1.1.3.
For $x, y \in \mathbb{R}^n$ ☛ $\langle x, y\rangle = x \cdot y = \sum_{k=1}^{n} x_k\, y_k$   (1.1.7.)

Example 1.1.4.
For $x, y \in \mathbb{C}^n$ ☛ $\langle x, y\rangle = x \cdot y = \sum_{k=1}^{n} x_k^*\, y_k$   (1.1.8.)
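A small numerical sketch of (1.1.7.) and (1.1.8.) (illustrative, not from the slides); note that for complex vectors the first argument must be conjugated, which is exactly what numpy.vdot does.

```python
import numpy as np

# Real case (1.1.7.)
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(np.dot(x, y))            # 32.0

# Complex case (1.1.8.): conjugate the first argument
u = np.array([1 + 1j, 2 - 1j])
v = np.array([3 + 0j, 1 + 2j])
print(np.vdot(u, v))           # sum_k u_k^* v_k
print(np.vdot(u, u).real)      # ||u||^2 >= 0, imaginary part vanishes
```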


Example 1.1.5.
Let $\mathcal{C}([a, b])$ denote the set of continuous functions $x(t)$ defined on the closed interval $a \le t \le b$.
For $x, y \in \mathcal{C}([a, b])$ an inner product is given by
$\langle x, y\rangle = \int_a^b x^*(t)\, y(t)\, dt$


Definition 1.1.14.
The axiom of positivity allows one to define a norm or length for each vector of a Euclidean space
$\|x\| = +\sqrt{\langle x, x\rangle}$   (1.1.12.)
In particular $\|x\| = 0 \Leftrightarrow x = 0$.
Further ☛ if $\lambda \in \mathbb{C}$
$\|\lambda x\| = \sqrt{|\lambda|^2\, \langle x, x\rangle} = |\lambda|\, \|x\|$   (1.1.13.)
This allows a normalization for any non-zero length vector.
Indeed ☛ if $x \neq 0$ then $\|x\| > 0$.
Thus ☛ we can take $\lambda \in \mathbb{C}$ such that $|\lambda| = \|x\|^{-1}$ and $y = \lambda x$.
It follows that $\|y\| = |\lambda|\, \|x\| = 1$.

Example 1.1.6.
The length of a vector $x \in \mathbb{R}^n$ is
$\|x\| = \left( \sum_{k=1}^{n} x_k^2 \right)^{1/2}$   (1.1.14.)

Example 1.1.7.
The length of a vector $x \in \mathcal{C}^2([a, b])$ is
$\|x\| = \left\{ \int_a^b |x(t)|^2\, dt \right\}^{1/2}$   (1.1.15.)
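A brief numerical sketch of (1.1.12.)–(1.1.14.) and of the normalization argument (illustrative values only):

```python
import numpy as np

x = np.array([3.0, 4.0])
norm_x = np.sqrt(np.dot(x, x))        # (1.1.14.): square root of the sum of squares
assert np.isclose(norm_x, np.linalg.norm(x))

# Normalization: y = lam * x with |lam| = 1/||x|| is a unit vector
lam = 1.0 / norm_x
y = lam * x
print(np.linalg.norm(y))              # 1.0
```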


Definition 1.1.15.
In a real Euclidean space the angle between vectors $x$ and $y$ is defined by
$\cos \widehat{xy} = \dfrac{|\langle x, y\rangle|}{\|x\|\, \|y\|}$   (1.1.16.)

Definition 1.1.16.
Two vectors are orthogonal, $x \perp y$, if $\langle x, y\rangle = 0$.
The zero vector is orthogonal to every vector in $E$.

Definition 1.1.17.
In a real Euclidean space the angle between two orthogonal non-zero vectors is $\pi/2$ ☛ i.e. $\cos \widehat{xy} = 0$.
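A quick check of (1.1.16.) and of orthogonality (an illustrative sketch):

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])
cos_angle = abs(np.dot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))
print(np.degrees(np.arccos(cos_angle)))   # 45.0

# Orthogonal pair: the inner product vanishes
print(np.dot(np.array([1.0, 0.0]), np.array([0.0, 1.0])))   # 0.0
```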


Lemma 1.1.1.
If $\{x_1, x_2, \cdots, x_k\}$ is a set of mutually orthogonal non-zero vectors, then its elements are linearly independent.

Proof.
Assume that the vectors are linearly dependent.
Then ☛ there exist numbers $\lambda_i$ (not all zero) such that
$\lambda_1 x_1 + \lambda_2 x_2 + \cdots + \lambda_k x_k = 0$   (1.1.17.)
Further ☛ assume that $\lambda_1 \neq 0$ and consider the scalar product of the linear combination (1.1.17) with the vector $x_1$.
Since $x_i \perp x_j$ for $i \neq j$ ☛ we have
$\lambda_1 \langle x_1, x_1\rangle = \langle x_1, 0\rangle$   (1.1.18.)
or equivalently
$\lambda_1 \|x_1\|^2 = 0 \ \Rightarrow\ x_1 = 0$   (1.1.19.)
which contradicts the hypothesis.

Corollary 1.1.3.
If a sum of mutually orthogonal vectors is $0$, then each vector must be $0$.


Definition 1.1.18.
A basis $x_1, \ldots, x_n$ of $V$ is called orthogonal if $\langle x_i, x_j\rangle = 0$ for all $i \neq j$
☛ the basis is called orthonormal if in addition each vector has unit length, $\|x_i\| = 1, \ \forall\, i = 1, \ldots, n$

Example 1.1.8.
The simplest example of an orthonormal basis is the standard basis
$e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \\ 0 \end{pmatrix}, \quad \ldots, \quad e_n = \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}$   (1.1.20.)


Lemma 1.1.2.
If every vector of the set $\{x_1, x_2, \cdots, x_k\}$ is orthogonal to $y \in E$,
then every linear combination of this set of vectors is also orthogonal to $y$:
$\left\langle y, \sum_{i=1}^{k} \lambda_i x_i \right\rangle = \sum_{i=1}^{k} \lambda_i\, \langle y, x_i\rangle$   (1.1.22.)

Theorem 1.1.1. (Pythagorean theorem)
If $x \perp y \in E$ then
$\|x + y\|^2 = \langle x + y, x + y\rangle = \|x\|^2 + \|y\|^2$   (1.1.23.)
In any right triangle ☛ the area of the square whose side is the hypotenuse
(the side opposite the right angle) is equal to the sum of the areas of the squares
whose sides are the two legs (the two sides that meet at a right angle).

Corollary 1.1.4.
If the vectors of the set $\{x_1, x_2, \cdots, x_k\}$ are mutually orthogonal, $x_i \perp x_j$ with $i \neq j$, then
$\|x_1 + \cdots + x_k\|^2 = \|x_1\|^2 + \cdots + \|x_k\|^2$   (1.1.24.)
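A quick numerical confirmation of (1.1.23.) for a pair of orthogonal vectors (illustrative):

```python
import numpy as np

x = np.array([3.0, 0.0])
y = np.array([0.0, 4.0])
assert np.isclose(np.dot(x, y), 0.0)                        # x is orthogonal to y
lhs = np.linalg.norm(x + y) ** 2
rhs = np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2
print(lhs, rhs)                                             # 25.0 25.0
```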


Corollary 1.1.5. (Triangle inequality)
For $x, y \in E$ we have
$\|x\| - \|y\| \ \le\ \|x + y\| \ \le\ \|x\| + \|y\|$   (1.1.25.)
The length of a side of a triangle does not exceed the sum of the lengths of the other two sides,
nor is it less than the absolute value of the difference of the other two sides.

Proof.
Consider the scalar product
$\|x + y\|^2 = \langle x + y, x + y\rangle = \|x\|^2 + 2\,\Re\langle x, y\rangle + \|y\|^2 \le \|x\|^2 + 2\,\|x\|\,\|y\| + \|y\|^2 = (\|x\| + \|y\|)^2$
since $\Re\langle x, y\rangle \le |\langle x, y\rangle| \le \|x\|\,\|y\|$.
Taking square roots gives the upper bound; applying it to $x = (x + y) + (-y)$ gives $\|x\| \le \|x + y\| + \|y\|$, i.e. the lower bound.


Definition 1.1.19.
Let $x = (x_1, \ldots, x_k, \ldots)$ be an infinite sequence of real numbers such that $\sum_{k=1}^{\infty} x_k^2$ converges.
The sequence $x$ defines a point of Hilbert coordinate space $\mathbb{R}^\infty$.
It also defines a vector with $k$-th component $x_k$, which, as in $\mathbb{R}^n$, we identify with the point with $k$-th coordinate $x_k$.
Addition and scalar multiplication are defined analogously to (1.1.1) and (1.1.2).
The norm of a Hilbert vector is the Pythagorean expression
$\|x\| = \left( \sum_{k=1}^{\infty} x_k^2 \right)^{1/2}$
By hypothesis this series converges, so $x$ is an element of the Hilbert space $H = E^\infty$.
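An illustrative check (not from the slides) that the sequence $x_k = 1/k$ defines a Hilbert-space vector: its squared components sum to the finite value $\pi^2/6$, so its norm is finite.

```python
import numpy as np

k = np.arange(1, 100001, dtype=float)
x = 1.0 / k                                  # x_k = 1/k, an element of the Hilbert space
partial_norm = np.sqrt(np.sum(x ** 2))       # truncated Pythagorean norm
print(partial_norm, np.pi / np.sqrt(6.0))    # approaches pi/sqrt(6) ~ 1.2825
```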


Linear Operators on Euclidean Spaces

Definition 1.2.0.
An operator $A$ on $E$ is a vector function $A : E \to E$.

Definition 1.2.1.
The operator is called linear if
$A(\alpha x + \beta y) = \alpha\, A x + \beta\, A y, \ \forall\, x, y \in E$ and $\forall\, \alpha, \beta \in \mathbb{C}$ (or $\mathbb{R}$)
Let $A$ be an $n \times n$ matrix and $x$ a vector ➪ the function $T(x) = A x$ is a linear operator.

Definition 1.2.2.
A vector $x \neq 0$ is an eigenvector of $A$ if $\exists\, \lambda$ satisfying $A x = \lambda x$;
in such a case $(A - \lambda I)\, x = 0$ ☛ $I$ is the identity matrix.
Eigenvalues are given by the relation $\det(A - \lambda I) = 0$
➪ $\det(A - \lambda I)$ is a polynomial of degree $n$, which has $m$ different roots with $1 \le m \le n$.
Eigenvectors associated with eigenvalue $\lambda$ are obtained by solving the (singular) linear system $(A - \lambda I)\, x = 0$.
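A short sketch of Definition 1.2.2 (illustrative) using numpy.linalg.eig, checking $A x = \lambda x$ for each computed pair:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)       # columns of eigvecs are eigenvectors
print(eigvals)                            # e.g. [3. 1.] (ordering not guaranteed)

for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)    # A x = lambda x
```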


Remark 1.2.1.
If $x_1$ and $x_2$ are eigenvectors with eigenvalue $\lambda$ and $a, b$ are constants
➪ $a x_1 + b x_2$ is an eigenvector with eigenvalue $\lambda$, because
$A(a x_1 + b x_2) = a\, A x_1 + b\, A x_2 = a \lambda x_1 + b \lambda x_2 = \lambda\, (a x_1 + b x_2)$
It is straightforward to show that:
(i) the eigenvectors associated to a given eigenvalue form a vector space
(ii) two eigenvectors corresponding to different eigenvalues are linearly independent

Definition 1.2.3.
A matrix $A$ is said to be diagonable (or diagonalizable) if its eigenvectors form a basis,
i.e. if any vector $v$ can be written as a linear combination of eigenvectors.
Equivalently ☛ a matrix $A$ is said to be diagonable if there exist $n$ eigenvectors $x_1, \ldots, x_n$ that are linearly independent.


In such a case ☛ we can form with the $n$ eigenvectors an $n \times n$ matrix $U$
such that the $k$-th column of $U$ is the $k$-th eigenvector.
In this way ☛ the $n$ relations $A x_k = \lambda_k x_k$ can be written in matrix form
$A U = U A'$ ☛ $A'$ is an $n \times n$ diagonal matrix such that $A'_{ij} = \lambda_i\, \delta_{ij}$
The latter can also be written as
$U^{-1} A\, U = A'$, or equivalently $A = U A' U^{-1}$,   (1.2.11.)
which bind the diagonal matrix to the original matrix
($U$ is invertible because the eigenvectors are linearly independent).

Definition 1.2.4.
Transformation (1.2.11.) represents a change of basis.
Note that the eigenvalues (and therefore the matrix $A'$) are independent of the change of basis:
if $B = W^{-1} A\, W$ with $W$ an arbitrary (invertible) $n \times n$ matrix
➪ $\det(B - \lambda I) = \det(W^{-1} A W - \lambda\, W^{-1} W) = \det(A - \lambda I)$   (1.2.12.)
such that it has the same eigenvalues.
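A numerical sketch of (1.2.11.) (illustrative): build $U$ from the eigenvectors and verify that $U^{-1} A U$ is diagonal with the eigenvalues on the diagonal.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, U = np.linalg.eig(A)             # columns of U are eigenvectors of A

A_prime = np.linalg.inv(U) @ A @ U        # U^{-1} A U
assert np.allclose(A_prime, np.diag(eigvals))                     # diagonal matrix of eigenvalues
assert np.allclose(A, U @ np.diag(eigvals) @ np.linalg.inv(U))    # A = U A' U^{-1}
```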


Definition 1.2.5.
If a real function $f(x)$ has a Taylor expansion
$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^n$   (1.2.13.)
the matrix function is defined by substituting the argument:
powers become matrix powers, additions become matrix sums, and multiplications become scaling operations.
If the real series converges for $|x| < r$, the corresponding matrix series converges for matrices whose eigenvalues all have modulus less than $r$.
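A sketch of Definition 1.2.5 for $f(x) = e^x$ (illustrative): sum the truncated Taylor series in matrix powers and compare with the closed-form rotation matrix it should produce.

```python
import numpy as np

def matrix_exp_series(A, terms=30):
    """Truncated Taylor series sum_n A^n / n! defining the matrix exponential."""
    result = np.zeros_like(A)
    term = np.eye(A.shape[0])
    for n in range(terms):
        result = result + term
        term = term @ A / (n + 1)
    return result

# Rotation generator: exp(A) should equal [[cos 1, sin 1], [-sin 1, cos 1]]
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
expected = np.array([[np.cos(1.0), np.sin(1.0)],
                     [-np.sin(1.0), np.cos(1.0)]])
assert np.allclose(matrix_exp_series(A), expected)
```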


Definition 1.2.6.
A complex square matrix $A$ is Hermitian if $A = A^\dagger$,
where $A^\dagger = (A^*)^T$ is the conjugate transpose of the complex matrix $A$.

Remark 1.2.2.
It is easily seen that if $A$ is Hermitian then:
(i) its eigenvalues are real
(ii) eigenvectors associated to different eigenvalues are orthogonal
(iii) it has a complete set of orthogonal eigenvectors, which makes it diagonalizable

Definition 1.2.7.
A partially defined linear operator $A$ on a Hilbert space $H$ is called symmetric if
$\langle A x, y\rangle = \langle x, A y\rangle, \ \forall\, x$ and $y$ in the domain of $A$.
A symmetric everywhere-defined operator is called self-adjoint or Hermitian.
Note that if we take as Hilbert space $H$ ☛ $\mathbb{C}^n$ with the standard dot product
and interpret a Hermitian square matrix $A$ as a linear operator on this Hilbert space, we have
$\langle x, A y\rangle = \langle A x, y\rangle, \ \forall\, x, y \in \mathbb{C}^n$
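An illustrative check of Remark 1.2.2 with numpy.linalg.eigh, which is designed for Hermitian matrices:

```python
import numpy as np

A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(A, A.conj().T)               # A is Hermitian: A = A^dagger

eigvals, U = np.linalg.eigh(A)
print(eigvals)                                  # real eigenvalues
assert np.allclose(eigvals.imag, 0.0)
assert np.allclose(U.conj().T @ U, np.eye(2))   # eigenvectors are orthonormal
```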


Example 1.2.1.
A convenient basis for traceless Hermitian $2 \times 2$ matrices is given by the Pauli matrices:
$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$   (1.1.36.)
They obey the following relations:
(i) $\sigma_i^2 = I$
(ii) $\sigma_i \sigma_j = -\sigma_j \sigma_i$ for $i \neq j$
(iii) $\sigma_i \sigma_j = i\, \sigma_k$ ☛ $(i, j, k)$ a cyclic permutation of (1, 2, 3)
These three relations can be summarized as
$\sigma_i \sigma_j = I\, \delta_{ij} + i\, \epsilon_{ijk}\, \sigma_k$   (1.1.37.)
☛ $\epsilon_{ijk}$ is the Levi-Civita symbol
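An illustrative numerical verification of (1.1.36.)–(1.1.37.) with NumPy (indices run over 0, 1, 2 instead of 1, 2, 3):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def levi_civita(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2}."""
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((i, j, k), 0)

# Verify sigma_i sigma_j = delta_ij I + i eps_ijk sigma_k for all i, j
for i in range(3):
    for j in range(3):
        rhs = (i == j) * I2 + sum(1j * levi_civita(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(sigma[i] @ sigma[j], rhs)
```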


