There we go.. thank you.

It looks like the spreadsheet is doing spectrum reconstruction: creating a graph of the light frequencies hitting the sensor from the digital readings of each sensor element. The sensor readings are single digital values, so the general goal is to convert each value into a range of amplitudes at multiple frequencies.

From the shapes of the lower curves, it looks like the math is based on Fourier transforms. That will take some explanation...

Mechanically, Fourier transforms are based on two parts: multiplication of sine waves and integration.

Integration is easiest to break down in discrete form.. if you start with a smooth continuous function (like a sine wave), you can approximate it by making a bar graph: a series of equal-width rectangles that meet the curve at their upper-left corner. It's easy to multiply the height of each rectangle by its width to get the area, then you can add the rectangle areas together to get an approximation of the area under the curve. The difference between the approximation and the real area will be a series of roughly-triangular segments between the top of the rectangle and the curve.

If you make the rectangles narrower, the triangular segments get smaller, and the difference between the combined area of the rectangles and the area under the smooth curve becomes smaller. The ideal situation would be to make the width of each rectangle 0, but that leaves you trying to get a useful value multiplying zero by infinity.
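Here's a quick Python sketch of that rectangle approximation (the function names are mine), showing the error shrinking as the rectangles get narrower:

```python
import math

def left_riemann(f, a, b, n):
    """Approximate the integral of f over [a, b] with n equal-width
    rectangles whose height is taken at the left edge of each one."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

# Integrating sin(x) from 0 to pi; the exact area is 2.
coarse = left_riemann(math.sin, 0.0, math.pi, 10)    # 10 wide rectangles
fine   = left_riemann(math.sin, 0.0, math.pi, 1000)  # 1000 narrow ones
```

With 10 rectangles the answer is off by a couple percent; with 1000 it agrees with the exact area to several decimal places.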

The entire subject of integral calculus involves finding the conditions where you *can* get a useful value from 'zero times infinity' and calculating that kind of value. (The subject of differential calculus mostly involves getting useful values from 'zero divided by zero'). The logical jump from 'small but finite' to 'infinitely small' is interesting, but doesn't add anything useful to the current subject.

The integral of a sine wave over a whole number of periods is zero, because the curve spends equal amounts of time-and-amplitude positive and negative. But -- and this is a key point of interest -- the square of a sine wave can never have negative values. Any value multiplied by itself, positive or negative, produces a positive result. Therefore the integral of a sine wave multiplied by itself has to be a positive, nonzero value.
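Both claims are easy to check numerically with the same rectangle-sum integral as above:

```python
import math

def integrate(f, a, b, n=10000):
    """Simple rectangle-sum approximation of the integral of f over [a, b]."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

period = 2 * math.pi

# A sine wave spends equal time positive and negative over a full period...
sine_area = integrate(math.sin, 0.0, period)

# ...but its square is never negative, so the area is strictly positive
# (it works out to exactly half the period, i.e. pi here).
square_area = integrate(lambda t: math.sin(t) ** 2, 0.0, period)
```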

That leads us to the general subject of multiplying sine waves by each other. The algebra of periodic functions is interesting in general, but the relationship between multiplication and addition is especially interesting:

sin( f1 t ) x sin( f2 t ) = ( cos( (f1 - f2) t ) - cos( (f1 + f2) t ) ) / 2

For a frequency multiplied by itself:

sin( f1 t ) x sin( f1 t ) = ( cos( 0 ) - cos( 2 f1 t ) ) / 2 = ( 1 - cos( 2 f1 t ) ) / 2

which contains a constant term of 1/2 -- that's the part that survives integration -- but any product of two different frequencies is just the sum of two cosine waves, both at nonzero frequencies.
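The product-to-sum identity behind this, sin(a) x sin(b) = ( cos(a - b) - cos(a + b) ) / 2, can be spot-checked numerically at a few arbitrary points:

```python
import math

# Spot-check the product-to-sum identity:
#   sin(a) * sin(b) == ( cos(a - b) - cos(a + b) ) / 2
pairs = [(0.3, 0.5), (1.7, 2.2), (4.0, 5.1), (0.0, 1.0)]
worst = max(abs(math.sin(a) * math.sin(b)
                - (math.cos(a - b) - math.cos(a + b)) / 2)
            for a, b in pairs)
# 'worst' is the largest deviation between the two sides -- it should be
# zero up to floating-point rounding.
```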

Addition and subtraction are preserved under integration, so:

Integral( g + h ) = Integral( g ) + Integral( h )

and both terms on the right will always be zero when g and h are the cosine components in the product of sines at two different frequencies -- a cosine at a nonzero frequency integrates to zero over a whole number of periods for the same reason a sine does.

With all of that background, we can finally get to the payoff: the integral of the product of two sine waves will always be zero if their frequencies are different. If the frequencies are the same, the integral will be nonzero and proportional to the amplitudes of the two input signals. We generally choose the amplitude of the test wave to be 2 and divide by the length of the integration interval, so the result equals the amplitude of that frequency component in the input signal.
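A small sketch of the payoff, using a rectangle-sum integral over one full period (the integer frequencies and the 0.7 amplitude are arbitrary choices for the demo):

```python
import math

def integrate(f, a, b, n=20000):
    """Simple rectangle-sum approximation of the integral of f over [a, b]."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

period = 2 * math.pi

# Different frequencies: the integral of the product is zero.
cross = integrate(lambda t: math.sin(3 * t) * math.sin(5 * t), 0.0, period)

# Same frequency: probe with a test wave of amplitude 2 and divide by the
# interval length, and the result reads out the component's amplitude.
amplitude = 0.7
recovered = integrate(
    lambda t: amplitude * math.sin(3 * t) * 2 * math.sin(3 * t),
    0.0, period) / period
```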

From there, we move to the concept of 'transforms': operations that turn a function of one variable (call it 't') into a function of another variable (call it 's'). For time-based signals, like the instantaneous amplitude of light, we treat the 'x' in sin(x) as a function of time: sin( x(t) ). The Fourier transform multiplies a periodic input signal by every possible test frequency, and the value of the transform at frequency s is the integral of the input signal times sin( s t ).

The only nonzero values of a Fourier transform are proportional to the amplitude of frequency components that exist in the input signal.

Since the Fourier transform takes a time-based signal as input and produces a graph of frequencies as output, we say it maps functions from 'the time domain' to 'the frequency domain'.
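As a sketch of that mapping (simplified to whole-number frequencies and a signal I made up for the demo), probing a two-component signal with test sines pulls out exactly the amplitudes that went in:

```python
import math

N = 4096
period = 2 * math.pi
dt = period / N

# A time-domain signal containing two frequency components:
# amplitude 1.5 at frequency 2, amplitude 0.25 at frequency 7.
signal = [1.5 * math.sin(2 * i * dt) + 0.25 * math.sin(7 * i * dt)
          for i in range(N)]

def component_amplitude(samples, freq):
    """Probe with a test sine of amplitude 2 and average the product over
    the period; the result is nonzero only where that frequency exists."""
    return sum(s * 2 * math.sin(freq * i * dt)
               for i, s in enumerate(samples)) * dt / period

# The 'frequency domain' view: amplitude at each candidate frequency.
spectrum = {f: component_amplitude(signal, f) for f in range(1, 10)}
```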

All of the math associated with the Fourier transform is reversible, so if we have a graph of frequencies as input, we can use a reverse Fourier transform to turn it back to a time-based signal. The time-domain function and the frequency-domain function are different but equivalent ways to represent the same signal.
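A sketch of that round trip, again simplified to whole-number frequencies: measure each component's amplitude, then add sine waves back together with those amplitudes and confirm the original signal reappears.

```python
import math

N = 2048
period = 2 * math.pi
dt = period / N
times = [i * dt for i in range(N)]

# Original time-domain signal (amplitudes chosen arbitrarily).
original = [0.8 * math.sin(3 * t) + 0.3 * math.sin(5 * t) for t in times]

# Forward: measure each frequency's amplitude with the orthogonality trick.
amps = {f: sum(s * 2 * math.sin(f * t)
               for s, t in zip(original, times)) * dt / period
        for f in range(1, 8)}

# Reverse: sum sine waves with the measured amplitudes.
rebuilt = [sum(a * math.sin(f * t) for f, a in amps.items()) for t in times]

max_error = max(abs(o - r) for o, r in zip(original, rebuilt))
```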

The relationship between the time domain and the frequency domain lets us do useful and interesting things, like creating filters. An ideal 1kHz filter allows every frequency below 1kHz to pass through unchanged, while blocking every frequency higher than 1kHz completely. In the frequency domain we can represent that as a rectangular pulse whose value is 1 from 0Hz to +/-1kHz and 0 for all frequencies beyond +/-1kHz. (Negative frequencies are an effect of periodic interaction where signals seem to move backward:

https://www.youtube.com/watch?v=uENITui5_jU)

The frequency-domain representation of running a signal through the perfect 1kHz filter is simply multiplying the frequency-domain input signal by the frequency-domain filter pulse. Then we can translate that back to the time domain with a reverse Fourier transform. The reverse-transformed equivalent of multiplication in the frequency domain is an operation called 'convolution', which is roughly demonstrated by this video:

https://www.youtube.com/watch?v=MNzBFgw ...

Instead of taking the integral for each frequency value, you start playing a new copy of one signal at each instant, scaled by the other signal's amplitude at that instant. The combined sum of all the time-offset copies is the time-domain convolution of the two signals that were multiplied in the frequency domain.
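That equivalence can be demonstrated with a naive discrete Fourier transform (the discrete version of the theorem gives you *circular* convolution, and the signal and kernel values here are made up for the demo):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Naive inverse discrete Fourier transform."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def circular_convolve(a, b):
    """Direct time-domain circular convolution."""
    N = len(a)
    return [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]

signal = [1.0, 2.0, 0.0, -1.0, 0.5, 0.0, 1.0, -0.5]
kernel = [0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25]

# Multiply in the frequency domain, then transform back...
freq_product = [s * k for s, k in zip(dft(signal), dft(kernel))]
via_frequency = [z.real for z in idft(freq_product)]

# ...and it matches convolving directly in the time domain.
via_time = circular_convolve(signal, kernel)
```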

If we just reverse-transform the rectangular pulse of a perfect filter, we get the time-domain function sin( x ) / x, sometimes called 'sinc': a tall central peak with ripples that decay symmetrically on either side.

The lower set of graphs looks like sinc() curves centered around different frequencies. Without doing the actual transforms, I think they're the frequency-domain equivalents of the pulse shapes in the upper set of graphs.
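The rect-to-sinc relationship itself is easy to verify numerically: reverse-transforming a rectangular pulse of width 2W amounts to integrating cos( w t ) over the passband (the sine parts cancel by symmetry), and the result matches sin( W t ) / ( pi t ):

```python
import math

W = 1.0  # cutoff frequency (rad/s) of an ideal low-pass filter

def inverse_of_rect(t, n=20000):
    """Numerically reverse-transform a rectangular frequency pulse by
    integrating cos(w * t) over the passband -W..W, with the usual
    1/(2*pi) normalization."""
    dw = 2 * W / n
    return sum(math.cos((-W + i * dw) * t) * dw
               for i in range(n)) / (2 * math.pi)

def sinc_form(t):
    """The closed-form answer: sin(W*t) / (pi*t), with the t=0 limit."""
    return math.sin(W * t) / (math.pi * t) if t else W / math.pi

samples = [0.5, 1.0, 3.7, 10.0]
errors = [abs(inverse_of_rect(t) - sinc_form(t)) for t in samples]
```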

That makes sense because of another neat thing you can do by moving between the time and frequency domains: math is reversible, so if you can convolve an input signal with a filter function to get a filtered output signal, you can deconvolve the output signal with the filter function to reconstruct the original input.

That applies directly to what's going on in the spreadsheet: each color segment is a bandpass filter for a given range of frequencies. Taking a reading from the sensor is functionally equivalent to convolving the input spectrum with the filter function to get a discrete-valued reading. If we deconvolve that reading with the filter function, we can get an approximation of the whole input spectrum.

Naturally the reconstruction isn't perfect.. operations like 'rounding to the nearest integer' aren't fully reversible, and ideally-reversible ones like 'divide by a large number' don't reverse well on computers with finite-precision math.. but you can get reasonably close.

Since we have multiple color filters, the deconvolved approximations from all of them can be combined to make a reasonably accurate approximation of the original input light spectrum.
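Here's a toy version of that job -- the filter curves, the 4-bin spectrum, and the use of plain Gaussian elimination are all my own inventions for the sketch; the spreadsheet presumably uses the sensor's real calibration curves:

```python
# Rows: sensor channels; columns: response at each of 4 wavelength bins.
# These filter shapes are made up for illustration.
filters = [
    [0.9, 0.3, 0.0, 0.0],   # "blue" channel
    [0.2, 0.8, 0.3, 0.0],   # "green" channel
    [0.0, 0.3, 0.8, 0.2],   # "yellow" channel
    [0.0, 0.0, 0.3, 0.9],   # "red" channel
]
true_spectrum = [0.5, 1.0, 0.7, 0.2]

# Each reading is the input spectrum weighted by that channel's curve.
readings = [sum(f * s for f, s in zip(row, true_spectrum))
            for row in filters]

def solve(matrix, vector):
    """Gauss-Jordan elimination with partial pivoting: undo the filtering
    by solving matrix @ x = vector for x."""
    n = len(vector)
    a = [row[:] + [v] for row, v in zip(matrix, vector)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(n):
            if r != col:
                factor = a[r][col] / a[col][col]
                a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

# 'Deconvolve' the four readings back into an approximate spectrum.
recovered = solve(filters, readings)
```

In this noiseless toy the recovery is exact; with real quantized readings and overlapping curves you'd get the "reasonably close" approximation described above.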