tryptophan, phenylalanine and DOPA. Each sample, say sample number i, is successively excited by light at J different wavelengths. For every excitation wavelength one measures the emitted spectrum; say the intensity of the fluorescent light emitted is measured at K different wavelengths. Hence for every i, one obtains a J × K excitation-emission matrix.

Thus the data one is handed is an I × J × K array. In bases, if {e_i} is a basis of C^I, {h_j} a basis of C^J, and {g_k} a basis of C^K, then T = ∑_{i,j,k} T_{ijk} e_i ⊗ h_j ⊗ g_k. A first goal is to determine r such that

    T ≃ ∑_{f=1}^{r} a_f ⊗ b_f ⊗ c_f,

where each f represents a substance. Writing a_f = ∑_i a_{i,f} e_i, then a_{i,f} is the concentration of the f-th substance in the i-th sample, and similarly, using the given bases of R^J and R^K, c_{k,f} is the fraction of photons the f-th substance emits at wavelength k, and b_{j,f} is the intensity of the incident light at excitation wavelength j multiplied by the absorption at wavelength j.
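To make the model concrete, here is a minimal numerical sketch, assuming NumPy; the sizes, the random factor matrices A, B, C, and the choice r = 3 (one component per amino acid) are illustrative stand-ins for measured data, not values from [1].

```python
import numpy as np

# Illustrative sizes: I samples, J excitation wavelengths, K emission wavelengths.
I, J, K = 5, 20, 30
r = 3  # one component per substance (tryptophan, phenylalanine, DOPA)

rng = np.random.default_rng(0)

# Hypothetical factor matrices (random stand-ins for measured quantities):
# A[i, f] = concentration of substance f in sample i,
# B[j, f] = incident intensity at excitation wavelength j times absorption of f,
# C[k, f] = fraction of photons substance f emits at emission wavelength k.
A = rng.random((I, r))
B = rng.random((J, r))
C = rng.random((K, r))

# T_{ijk} = sum_f A[i,f] B[j,f] C[k,f], i.e. T = sum_f a_f ⊗ b_f ⊗ c_f.
T = np.einsum('if,jf,kf->ijk', A, B, C)
print(T.shape)  # (5, 20, 30)
```

By construction this T admits an exact expression as a sum of r rank-one tensors; measured data, as discussed next, will not.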

There will be noise in the data, so T will actually be of generic rank, but there will be a very low rank tensor T̃ that closely approximates it. (For all complex spaces of tensors, there is a rank that occurs with probability one, which is called the generic rank; see Definition 3.1.4.2.) There is no metric naturally associated to the data, so the meaning of “approximation” is not clear.
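The effect of noise can be illustrated with a small experiment, again assuming NumPy; the noise level 1e-3 and the Frobenius norm used below are arbitrary choices (the point of the text being precisely that no metric is canonical), and the script does not compute the generic rank of Definition 3.1.4.2; it only shows that noisy data fails to be exactly low rank while remaining close to a low-rank tensor.

```python
import numpy as np

I, J, K, r = 5, 20, 30, 3
rng = np.random.default_rng(0)
A, B, C = rng.random((I, r)), rng.random((J, r)), rng.random((K, r))
T = np.einsum('if,jf,kf->ijk', A, B, C)            # exact rank at most 3

T_noisy = T + 1e-3 * rng.standard_normal(T.shape)  # what one actually measures

# The mode-1 unfolding (I x JK flattening) of the exact tensor has rank at most 3,
# while that of the noisy tensor has full rank 5: noise destroys exact low rank.
print(np.linalg.matrix_rank(T.reshape(I, J * K)))        # 3
print(np.linalg.matrix_rank(T_noisy.reshape(I, J * K)))  # 5

# Yet the low-rank T is extremely close to T_noisy -- here measured in the
# Frobenius norm, which is merely one convenient (not canonical) choice.
print(np.linalg.norm(T - T_noisy) / np.linalg.norm(T_noisy))  # ~ noise level
```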

In [1], one proceeds as follows to find r. First of all, r is assumed to be very small (at most 7 in their exposition). Then for each r0, 1 ≤ r0 ≤ 7, one assumes r0 = r and applies a numerical algorithm that attempts to find the r0 components (i.e., rank-one tensors) of which T̃ would be the sum; a sketch of such a fitting loop is given below. The values of r0 for which the algorithm does not converge quickly are thrown out. (The authors remark that this procedure is not mathematically justified, but it seems to work well in practice. In the example, these discarded values of r0 are too large.)
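The passage does not pin down the numerical algorithm; alternating least squares (ALS) is one standard way to fit a sum of r0 rank-one tensors, and the sketch below uses it purely as an assumed stand-in for the method of [1]. The function name cp_als, the iteration cap, the stopping tolerance, and the synthetic noisy rank-3 test tensor are all illustrative.

```python
import numpy as np

def cp_als(T, r0, n_iter=200, tol=1e-8, seed=0):
    """Fit T ≈ sum_{f=1}^{r0} a_f ⊗ b_f ⊗ c_f by alternating least squares.

    Returns the factor matrices, the relative fitting error, and the number
    of iterations used (a crude proxy for whether the fit "converges quickly")."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.random((n, r0)) for n in (I, J, K))
    nrm = np.linalg.norm(T)
    prev_err = np.inf
    for it in range(1, n_iter + 1):
        # Update each factor in turn, holding the other two fixed
        # (each update is an ordinary linear least-squares problem).
        A = T.reshape(I, J * K) @ np.einsum('jf,kf->jkf', B, C).reshape(J * K, r0) \
            @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T.transpose(1, 0, 2).reshape(J, I * K) \
            @ np.einsum('if,kf->ikf', A, C).reshape(I * K, r0) \
            @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T.transpose(2, 0, 1).reshape(K, I * J) \
            @ np.einsum('if,jf->ijf', A, B).reshape(I * J, r0) \
            @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        err = np.linalg.norm(T - np.einsum('if,jf,kf->ijk', A, B, C)) / nrm
        if abs(prev_err - err) < tol:  # stop once the fit stalls
            break
        prev_err = err
    return A, B, C, err, it

# Try each candidate rank r0 = 1,...,7 on a noisy rank-3 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.random((5, 3)), rng.random((20, 3)), rng.random((30, 3))
T = np.einsum('if,jf,kf->ijk', A0, B0, C0) + 1e-3 * rng.standard_normal((5, 20, 30))

for r0 in range(1, 8):
    *_, err, iters = cp_als(T, r0)
    print(f"r0 = {r0}: relative error {err:.1e} after {iters} iterations")
```

In this spirit, candidate values of r0 whose fit stalls at a large error or needs many iterations would be discarded, and the factor matrices of the remaining candidates inspected for physical plausibility (e.g., nonnegative concentrations and intensities).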

Then, for the remaining values of r0, one looks at the resulting tensors to see whether they are physically reasonable. This enables
