YouTube already supports that natively these days, although it's kind of hidden (and knowing Google, it might very well randomly disappear one day). Open the description of the video, scroll down and click "show transcript".
Although the end goal of a PhD is a specialized thesis, the first couple of years generally involve courses with broad coverage of analysis and algebra at the graduate level.
Given her achievements, I'd be very surprised if Cairo hasn't already covered the material in an undergrad degree.
The set of all real->real functions is still a vector space.
This vector space also has a basis (even if it is not as useful): there is an (uncountably infinite) subset of real->real functions such that every function can be expressed as a linear combination of a finite number of these basis functions, in exactly one way.
There isn't a clean way to write down this basis, though, as you need to use Zorn's lemma or equivalent to construct it.
If you restrict yourself to Lebesgue-integrable functions, can’t you take the complex Fourier transform of the function and call the terms of the Fourier series a basis, with the coefficients being the components of the vector of the function? This is a bit above my current mathematical paygrade, so forgive me if I’m not expressing the idea accurately, but I’m learning a lot both from the article and the ensuing discussion - hopefully you understand what I’m getting at.
I think what I may be asking is “Does the complex Fourier transform make a Hilbert space?” but I might be wrong both about that and about that being the right question.
Another example is the eigenvectors of linear operators like the Laplacian. Recall how, in finite dimension, the eigenvectors of a self-adjoint operator (a symmetric matrix) form an orthonormal basis of the vector space. There is a similar notion in infinite dimension. I can't find an English page that covers this very well, but there are a couple of paragraphs in the Spectral Theorem page (https://en.wikipedia.org/wiki/Spectral_theorem#Unbounded_sel... ). The article linked here also touches on this.
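The finite-dimensional statement is easy to check numerically. A quick sketch (my own example, not from the thread) using NumPy's `eigh`, which returns the orthonormal eigenbasis of a symmetric matrix as the columns of Q:

```python
import numpy as np

# Spectral theorem, finite-dimensional case: a real symmetric matrix
# has an orthonormal eigenbasis. eigh returns it as the columns of Q.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
w, Q = np.linalg.eigh(A)

# The columns of Q are orthonormal: Q^T Q = I
print(np.allclose(Q.T @ Q, np.eye(3)))       # True
# and they diagonalize A: Q^T A Q = diag(eigenvalues)
print(np.allclose(Q.T @ A @ Q, np.diag(w)))  # True
```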
Regarding your last sentence, one thing to note is that having a basis is not what makes you a Hilbert space, but rather having an inner product! In fact, to get the Fourier coefficients, you need to use that inner product.
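To see the inner product at work, here is a small numerical sketch (my own illustration, with an example function of my choosing): on 2π-periodic functions, the inner product <f, e^{inx}> = (1/2π) ∫ f(x) e^{-inx} dx recovers the n-th complex Fourier coefficient.

```python
import numpy as np

# Uniform grid over one period; the mean over this grid approximates
# (1/2pi) * integral over [-pi, pi) of the integrand.
x = np.linspace(-np.pi, np.pi, 100_000, endpoint=False)

def coeff(f, n):
    # c_n = <f, e^{inx}> via the L^2 inner product on [-pi, pi)
    return np.mean(f(x) * np.exp(-1j * n * x))

f = lambda t: np.sin(t) + 2 * np.cos(2 * t)
# sin(t) contributes c_{+1} = -i/2; 2*cos(2t) contributes c_{+2} = 1.
print(round(coeff(f, 1).imag, 6))  # -0.5
print(round(coeff(f, 2).real, 6))  # 1.0
```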
That's awesome info thank you so much. Reading it, a Hilbert basis is exactly what I am talking about. It's always exciting when my intuition guides me on the right path. I'll check out the Spectral theorem page also.
Any sufficiently nice function f can be written as

f(x) = a_0 + sum k=1 to infty (a_k sin(kx) + b_k cos(kx))

for some coefficients a_k and b_k (I don't remember the exact niceness condition, sorry).
This is very useful, but the functions sin(x), sin(2x), ... , cos(x), cos(2x), ... don't constitute a basis in the formal sense I mentioned above as you need an infinite sum to represent most functions. It is still often called a basis though.
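The "infinite sum is genuinely needed" point can be seen numerically. A sketch with an example of my own choosing (not from the thread): f(x) = x on [-π, π) has the sine series x ~ sum 2(-1)^{k+1}/k sin(kx); partial sums approach f in the L^2 sense, but no finite sum equals it, which is exactly why this family is a Hilbert basis rather than a basis in the finite-linear-combination sense.

```python
import numpy as np

# f(x) = x on [-pi, pi): its Fourier series is pure sine terms,
# x ~ sum_{k>=1} 2*(-1)^{k+1}/k * sin(kx).
x = np.linspace(-np.pi, np.pi, 10_000, endpoint=False)
f = x

def partial_sum(N):
    # Sum of the first N terms of the series, evaluated on the grid.
    return sum(2 * (-1) ** (k + 1) / k * np.sin(k * x) for k in range(1, N + 1))

# Root-mean-square (L^2) error of the truncated series: it shrinks as
# N grows, but never reaches zero for any finite N.
errors = [np.sqrt(np.mean((f - partial_sum(N)) ** 2)) for N in (5, 50, 500)]
print(errors)
```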
Thanks for that. That’s the trigonometric Fourier series. A complex Fourier series (which is what I mentioned) is equivalent and works similarly except that the terms are all uniform, so
f(x) = sum n=-infty to +infty C_n e^{i n x}
You can derive one from the other by using the identities
sin x = (e^(ix) - e^(-ix))/(2i)
cos x = (e^(ix) + e^(-ix))/2
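These two identities are easy to spot-check numerically; a quick sketch (my own, not from the thread):

```python
import cmath
import math

# Check sin x = (e^(ix) - e^(-ix))/(2i) and cos x = (e^(ix) + e^(-ix))/2
# at a few sample points.
for t in (0.0, 0.5, 1.3, -2.7):
    assert abs(math.sin(t) - (cmath.exp(1j * t) - cmath.exp(-1j * t)) / 2j) < 1e-12
    assert abs(math.cos(t) - (cmath.exp(1j * t) + cmath.exp(-1j * t)) / 2) < 1e-12
print("identities hold at sampled points")
```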
I specifically mentioned the complex series because I didn’t like the fact that the alternating terms use a different trig function, and it seemed weird to me to have every second dimension in a space be different in that way, but the two forms are equivalent.
The convergence criteria for Fourier series vary depending on how strongly you need convergence, but roughly: if a function is differentiable on the interval you care about, then the Fourier series provably converges on that interval; if it has jump discontinuities and that sort of thing, then (depending on whether it is square-integrable, among other properties) you can prove weaker forms of convergence (absolute, pointwise, etc.).
To address your comment, I don’t see why an infinite sum prevents something from being a basis. In fact I would specifically say that can’t be true, because then there would never be a basis for any infinite-dimensional space: any time you want to take an inner product in such a space you need an infinite sum, and you need such an inner product to construct the basis. A sibling comment pointed me in the direction of a Hilbert basis, which seems to be what I was thinking of.
If you're familiar with Zorn's Lemma, the construction is to order the linearly independent subsets by inclusion. Every chain has an upper bound: the union of its members, which is still linearly independent. By Zorn's Lemma there is therefore a maximal linearly independent set, and if some vector lay outside its span, adding that vector would give a strictly larger independent set, contradicting maximality. So the maximal set spans, i.e. it is a basis.
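In finite dimensions the same maximality idea collapses to a greedy algorithm: keep adding vectors that are independent of what you already have until you can't. A sketch (my own illustration, using a rank check for independence):

```python
import numpy as np

# Finite-dimensional shadow of the Zorn's-lemma argument: greedily grow
# a linearly independent set until it is maximal, i.e. a basis of the
# span of the input vectors.
def greedy_basis(vectors):
    basis = []
    for v in vectors:
        candidate = basis + [v]
        # Keep v only if it is independent of the vectors kept so far.
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            basis.append(v)
    return basis

vecs = [[1, 0, 0], [2, 0, 0], [0, 1, 0], [1, 1, 0]]
print(greedy_basis(vecs))  # [[1, 0, 0], [0, 1, 0]]
```

In the infinite-dimensional case there is no guarantee this process terminates, which is exactly where Zorn's Lemma takes over.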
He doesn't need to talk about it (though you might like to look up the notorious Hilbert's Basis Theorem); it happens to be the case that any vector space has a basis, but even if you don't know that, a vector space is still a vector space and its elements are still vectors.