Information Retrieval - ad-teaching.infor...
Latent Semantic Indexing 4/9
• Definition of Latent Semantic Indexing (LSI)
– Given an m x n term-document matrix A
– And a rank k, typically …
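The rank-k step of LSI is usually computed via a truncated SVD of the term-document matrix. A minimal sketch in NumPy, with an illustrative toy matrix (the values are not from the lecture):

```python
import numpy as np

# Toy 4 x 3 term-document matrix A (rows = terms, columns = documents);
# the entries are illustrative term frequencies
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])

k = 1  # target rank, chosen much smaller than min(m, n)

# Full SVD: A = U * S * V^T
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-k approximation: keep only the k largest singular values
A_k = U[:, :k] * s[:k] @ Vt[:k, :]

assert np.linalg.matrix_rank(A_k) == k
```

In this toy matrix, documents 1 and 2 share the same terms, so the rank-1 approximation already captures that "topic" while collapsing the noise-free structure into a single latent dimension.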
Latent Semantic Indexing 5/9
• Eigenvector decomposition (EVD)
– For an m x m matrix A and an m x 1 vector x, we say that x is an eigenvector of A if A · x = λ · x; λ is called an eigenvalue of A
– If A is symmetric, A has m linearly independent eigenvectors, which hence form a basis of R^m
– Then A can be written as U · D · U^T, where D is diagonal, containing the eigenvalues, and U is orthogonal, that is, U · U^T = U^T · U = I
– This is called the eigenvector decomposition of A, sometimes also called the Schur decomposition
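The properties on this slide can be checked numerically. A small NumPy sketch (the matrix is an arbitrary symmetric example, not from the lecture):

```python
import numpy as np

# A small symmetric matrix (m = 3)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is specialized for symmetric matrices: it returns real
# eigenvalues and an orthonormal set of eigenvectors
eigenvalues, U = np.linalg.eigh(A)
D = np.diag(eigenvalues)

# U is orthogonal: U · U^T = U^T · U = I
assert np.allclose(U @ U.T, np.eye(3))

# The eigenvector decomposition: A = U · D · U^T
assert np.allclose(A, U @ D @ U.T)

# Each column of U is an eigenvector: A · x = λ · x
for i in range(3):
    assert np.allclose(A @ U[:, i], eigenvalues[i] * U[:, i])
```

Using `eigh` rather than the general `eig` exploits the symmetry of A, which is exactly the case the slide discusses.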