Your information about quadrature signals is valid, but mostly irrelevant to FFT calculations.
Quadrature sampling resolves the ambiguity of Nyquist folding, where a 999Hz signal and a 1001Hz signal sampled at 1000Hz both appear to alias to 1Hz.. strictly speaking the 999Hz signal aliases to -1Hz while the 1001Hz signal aliases to +1Hz, but with a single real channel there's no way to tell those frequencies apart just looking at the waveforms. Quadrature makes it possible to distinguish positive frequencies from negative frequencies.
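Here's a quick sketch of that ambiguity in Python with numpy (just an illustration, not anyone's production code):

```python
import numpy as np

fs = 1000.0                      # sample rate, Hz
t = np.arange(32) / fs           # 32 sample instants

# Real (single-channel) sampling: 999 Hz and 1001 Hz give identical samples.
real_999  = np.cos(2 * np.pi *  999 * t)
real_1001 = np.cos(2 * np.pi * 1001 * t)
print(np.allclose(real_999, real_1001))   # True

# Quadrature (complex) sampling keeps the sign of the frequency:
# 999 Hz aliases to -1 Hz, 1001 Hz to +1 Hz, and the samples differ.
quad_999  = np.exp(2j * np.pi *  999 * t)
quad_1001 = np.exp(2j * np.pi * 1001 * t)
print(np.allclose(quad_999, quad_1001))                     # False
print(np.allclose(quad_999, np.exp(-2j * np.pi * 1 * t)))   # True: looks like -1 Hz
```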
(Aliasing to a negative frequency is what makes wagon wheels in old movies look like they're rotating slowly backwards.)
An FFT fed with real samples should never see a collection of samples where that ambiguity is possible. Allowing it breaks the Nyquist criterion, which says the sampling frequency must be more than twice the highest frequency of interest if you want to reconstruct both frequency and amplitude. You can detect the existence of most signals at exactly half the sampling frequency, but the sampler catches the waveform at the same two points of every cycle, so the apparent amplitude depends on the relative phase of the signal and the sample clock.. in the worst case the samples all land on zero crossings and the signal disappears entirely.
To get frequency and amplitude, you need more than two samples per cycle of the input waveform so the sample collector will (eventually) see the input signal at all amplitudes.
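A small numpy sketch of the half-sample-rate problem: a 500Hz tone sampled at 1000Hz always lands on the same two points of its cycle, so the apparent amplitude depends entirely on the starting phase:

```python
import numpy as np

fs = 1000.0
t = np.arange(100) / fs

# A 500 Hz tone sampled at 1000 Hz: exactly two samples per cycle.
# The apparent amplitude is sin(phase), not the true amplitude of 1.
for phase in (0.0, np.pi / 4, np.pi / 2):
    x = np.sin(2 * np.pi * 500 * t + phase)
    print(f"phase {phase:.2f}: peak sample = {np.max(np.abs(x)):.3f}")
# phase 0.00: every sample is (numerically) zero -- the tone vanishes.
```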
Now, it is common to represent periodic functions with complex exponentials.. Euler's great insight was that doing so reduces trigonometry to algebra and makes life a heck of a lot easier. As a mental model, a complex exponential represents a point rotating in the complex plane at some frequency.. plotted against time, it traces out a helix around the time axis in a time-real-imaginary coordinate space.
FFTs don't care about that, though. They operate on sine waves.. specifically the fact that the product of sine waves at two different frequencies f1 and f2 is equivalent to the sum of two sinusoids at the frequencies (f1+f2) and (f1-f2).
That product, being composed of two sinusoids, spends equal amounts of time above and below the axis, so its integral over a sufficiently long time will be zero.
There's one catch though: if you multiply a sine wave by itself, every output value is the square of some input value. Squares can't be negative, so the integral of a sine wave multiplied by itself grows with time, in proportion to the square of the input amplitude.
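Both facts are easy to check with numpy, averaging over a whole number of cycles:

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs               # one second of samples
f1, f2 = 7.0, 3.0
s1 = np.sin(2 * np.pi * f1 * t)
s2 = np.sin(2 * np.pi * f2 * t)

# The product of two sines is the sum of two other sinusoids:
# sin(a)sin(b) = (cos(a-b) - cos(a+b)) / 2
sum_form = 0.5 * (np.cos(2 * np.pi * (f1 - f2) * t)
                  - np.cos(2 * np.pi * (f1 + f2) * t))
print(np.allclose(s1 * s2, sum_form))      # True

# Different frequencies: the product averages to zero over whole cycles.
print(abs(np.mean(s1 * s2)) < 1e-9)        # True

# Same frequency: the product is a square, never negative, and its
# mean is A^2 / 2 for amplitude A -- proportional to amplitude squared.
print(abs(np.mean(s1 * s1) - 0.5) < 1e-9)  # True
```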
The Fourier transform does that 'multiply by a reference sine wave at frequency f, then take the integral' with an arbitrary waveform as one input, for every frequency f in a given range. For any value of f, the product turns any frequency-f component of the input into a value that can never be negative, and components of every other frequency into values that will integrate to zero. Therefore any nonzero value in the integral must come from a frequency-f component of the input.
The graph of all those integrals tells you all the frequency components that make up the input.
The problem with the Fourier transform is that it's a continuous integral, which is extremely difficult to calculate mechanically.
The FFT solves a simpler version of the problem: the discrete Fourier transform. It uses a series of samples as one input, and reference sinusoids sampled at the same rate, for some discrete set of reference frequencies, as the other. That makes all the calculations straightforward, but the number of calculations you have to do increases with the square of the number of samples. Programmers refer to that as O(n^2) time, which gets prohibitively expensive no matter how fast your hardware is.. eventually we can find a value of n whose n^2 is too big to be reasonable.
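A direct numpy implementation makes the O(n^2) structure obvious (two nested loops over n), and agrees with a library FFT:

```python
import numpy as np

def naive_dft(x):
    """Direct DFT: for each output bin k, multiply the input by a sampled
    reference sinusoid and sum. Two nested loops -> O(n^2) work."""
    n = len(x)
    out = np.zeros(n, dtype=complex)
    for k in range(n):
        for m in range(n):
            out[k] += x[m] * np.exp(-2j * np.pi * k * m / n)
    return out

x = np.random.default_rng(0).standard_normal(64)
print(np.allclose(naive_dft(x), np.fft.fft(x)))   # True
```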
One property of the straightforward DFT is that you end up re-calculating products of the same values over and over. In 1965 James Cooley and John Tukey worked out a way to split the transform into smaller pieces so each intermediate result is calculated once and reused wherever it's needed again. That reduced the amount of work to O(n*log(n)), which is low enough to be reasonable.. now almost every complicated signal in the world gets FFT'd somewhere along the way.
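A minimal radix-2 Cooley-Tukey sketch (assuming, as the simplest case does, that the input length is a power of two) shows the reuse: each half-size transform is computed once and feeds two output bins:

```python
import numpy as np

def fft_radix2(x):
    """Radix-2 Cooley-Tukey: split into even/odd-index halves, transform
    each recursively, and combine with twiddle factors. Each half-size
    result is computed once and reused for two output bins -> O(n log n)."""
    n = len(x)
    if n == 1:
        return np.asarray(x, dtype=complex)
    even = fft_radix2(x[0::2])
    odd  = fft_radix2(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.default_rng(1).standard_normal(32)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))  # True
```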
Getting back to complex numbers and frequencies: the imaginary part of a complex exponential shows how signals would behave if they really were helices in a complex space, but that's an artifact of the tool we used to make the math easier, not a physical reality. When we want to talk about the actual sine wave, we drop the imaginary component and only pay attention to the real part.
cf20855 wrote: ↑Mon Sep 19, 2022 10:50 am
When I plotted the FFTZero output, however, I noticed a strange phenomenon. The plotted signal amplitude slowly wandered up and down.
What value are you referring to? An FFT delivers a table of amplitudes that represent frequency components of the input.
If you're just selecting the amplitude at a specific frequency, the up-and-down suggests your selected frequency is slightly off from the frequency in the samples.. you're capturing part of the slow (f1-f2) beat between two frequencies that are close together, but not exactly equal.
One side effect of the DFT is that it can't identify frequencies exactly, since it only multiplies-and-integrates against a discrete set of reference frequencies. Real frequency components that fall between those references get split across nearby 'bins' in the output.
Try plotting the bins on either side of the one you're using now, and see if they show an equal-but-opposite wobble.
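Here's a numpy sketch of that effect (the specific tone and bin numbers are just an example, not taken from your setup).. an off-bin tone, FFT'd window by window, makes the bin amplitudes wander as the window start phase changes:

```python
import numpy as np

fs = 1000.0
nfft = 256
bin_hz = fs / nfft                    # bin spacing, ~3.906 Hz
f_sig = 3.15 * bin_hz                 # a tone slightly above bin 3

# FFT successive sample windows and watch bin 3's amplitude wander:
# the off-bin tone's leakage changes with each window's start phase,
# and the neighboring bins pick up part of the energy too.
mags = []
for w in range(8):
    t = (np.arange(nfft) + w * nfft) / fs
    x = np.sin(2 * np.pi * f_sig * t)
    spec = np.abs(np.fft.rfft(x)) / (nfft / 2)
    mags.append(spec[3])
    print(f"window {w}: bins 2..4 = {spec[2]:.3f} {spec[3]:.3f} {spec[4]:.3f}")

print(f"bin 3 wander across windows: {max(mags) - min(mags):.3f}")
```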