Talk:Kramers–Kronig relations

Latest comment: 6 years ago by Sbyrnes321 in topic Deleting "Spatial Kramers-Kronig" section

Untitled


Since the Kramers–Kronig dispersion relations relate to so many topics in signal theory and physics, a more general discussion might be appropriate.

Why not start with a general causal transfer function and use it to derive the relations?

Specific examples of such a transfer function \chi, such as filters, susceptibilities, etc., could be listed at the end of the article.

ben

Personally, I would prefer stating the relations as \chi(\omega)=\frac{1}{\pi} P \int_{-\infty}^{\infty} d\omega' \frac{\chi'(\omega')}{\omega-\omega'}

ben
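(For comparison, the split form most textbooks quote, writing \chi = \chi_1 + i\chi_2 for the real and imaginary parts, is:

\chi_1(\omega) = \frac{1}{\pi}\, \mathcal{P}\!\int_{-\infty}^{\infty} \frac{\chi_2(\omega')}{\omega' - \omega}\, d\omega', \qquad \chi_2(\omega) = -\frac{1}{\pi}\, \mathcal{P}\!\int_{-\infty}^{\infty} \frac{\chi_1(\omega')}{\omega' - \omega}\, d\omega'

where \mathcal{P} denotes the Cauchy principal value. The compact complex form above is equivalent up to the choice of sign convention.)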

Maybe

 

would look better? Is the current version equivalent to this?

--dima 23:28, 8 August 2006 (UTC)

Toll


Why did somebody remove the reference to the important article by Toll? It didn't hurt, did it, to have a pointer to causality and Kramers-Kronig? (It took Kronig until 1942 to see the connection clearly.) --P.wormer 10:47, 23 February 2007 (UTC)

I had rewritten the article without paying too much attention to the existing references -- please revert my changes if that would be for the better. Chuck Yee 09:35, 24 February 2007 (UTC)

There is a mistake in the integrations of the first two equations: the argument in χ1 and χ2 should be ω′ and not ω.

WikiProject class rating


This article was automatically assessed because at least one WikiProject had rated the article as start, and the rating on other projects was brought up to start class. BetacommandBot 09:57, 10 November 2007 (UTC)

Problem of sign


It seems there is an error with the sign. As a simple example, the Fourier transform of the Heaviside function can be used:

 

Then finding the imaginary part from the real part with the proposed expressions leads to

 

which is the opposite of what is expected. --193.54.84.254 (talk) 15:15, 21 November 2008 (UTC)
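(For what it's worth, here is a worked check of my own, using the physics convention χ(ω) = ∫ χ(t) e^{iωt} dt; sign conventions differ between fields, which may be the source of the discrepancy. The transform of the Heaviside step is

\hat\theta(\omega) = \int_0^{\infty} e^{i\omega t}\, dt = \pi\delta(\omega) + i\,\mathcal{P}\frac{1}{\omega}

so χ1(ω) = πδ(ω) and χ2(ω) = 𝒫(1/ω). Substituting χ1 into the relation χ2(ω) = −(1/π) 𝒫 ∫ χ1(ω′)/(ω′ − ω) dω′ gives −(1/π) · π · 1/(0 − ω) = 1/ω, which matches. Under the opposite convention e^{−iωt}, χ2 flips sign while χ1 does not, so the same formula then produces the opposite sign unless the sign in the relation is flipped along with it.)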

Over-simplification?


'The segment at infinity vanishes since we assume χ(ω) vanishes as we take |ω| → ∞' --- I'm not sure if it's true in general... —Preceding unsigned comment added by 93.105.182.2 (talk) 11:26, 25 December 2008 (UTC)

Canonical URL?


The canonical URL for this article has three unicode characters in between Kramers and Kronig. http://en.wiki.x.io/wiki/Kramers%E2%80%93Kronig_relation Any reason for this? Shouldn't the canonical URL be simply http://en.wiki.x.io/wiki/Kramers-Kronig_relation (i.e. an ASCII dash/minus '-' 0x2D in hex) Thanks! Woz2 (talk) 11:45, 14 May 2009 (UTC)

I do not know anything about canonical URL, but there should be an en dash between Kramers and Kronig, per WP:DASH. Looking here[1] that is indeed E2 80 93 in UTF-8 hex, so it seems to be correct. -- Crowsnest (talk) 14:43, 14 May 2009 (UTC)
Thanks! (BTW, by "canonical URL" I meant "non-redirected URL": i.e. the one and only one that doesn't have a "(Redirected from..." notice on it. By counter example:

Kramers-Kronig_relation

which contains:

[[2]]

is a non-canonical or redirect URL) Woz2 (talk) 16:02, 14 May 2009 (UTC)

Time-domain proof


The section on the time-domain proof states: "This proof covers slightly different ground from the one above in that it connects the real and imaginary frequency domain parts of any function that is causal in the time domain, and bypasses the condition about the function being analytic in the upper half plane of the frequency domain."

I believe that any function that is causal in the time domain is analytic in the upper half plane in the frequency domain. Certainly the proofs in the time- and frequency-domains have to be equivalent. Can anyone confirm or refute this and does anyone know a good reference for it?

--DJIndica (talk) 16:13, 4 November 2010 (UTC)

DJIndica is correct on this, and I think the term "bypasses" can be misleading: the time-domain proof does not remove the analyticity requirement, it only expresses it differently (causality → connection between the even and odd components). Dcconsta (talk) 10:30, 16 November 2013 (UTC)
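(A sketch of that equivalence, as I understand it — my paraphrase, under the e^{+iωt} convention: split χ(t) = χe(t) + χo(t) into even and odd parts. Causality, χ(t) = 0 for t < 0, is exactly the statement χo(t) = sgn(t) χe(t). For real χ(t), the transform of χe is Re χ(ω) and that of χo is i Im χ(ω), while the transform of sgn(t) is 2i 𝒫(1/ω), so the multiplication becomes a convolution:

i\,\mathrm{Im}\,\chi(\omega) = \frac{1}{2\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty} \frac{2i}{\omega - \omega'}\,\mathrm{Re}\,\chi(\omega')\, d\omega' \quad\Longrightarrow\quad \mathrm{Im}\,\chi(\omega) = -\frac{1}{\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty} \frac{\mathrm{Re}\,\chi(\omega')}{\omega' - \omega}\, d\omega'

which is one of the Kramers–Kronig relations; the analyticity requirement reappears as the condition that this convolution converge.)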

Yes. I'll fix it. Woz2 (talk) 01:24, 9 April 2014 (UTC)

causality and analyticity


From the lede:

These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems because causality implies the analyticity condition is satisfied, and conversely, analyticity implies causality of the corresponding physical system.

Why is causality the same as analyticity? Could the article briefly explain that or else link to something that does? I'm guessing that in this case, what "causality" means is that if two input functions agree before a certain time, then the response functions also agree before that time. Is that correct? (I've actually taken the liberty of directing the "causality" link to causal system rather than to causality (physics).) Michael Hardy (talk) 17:50, 9 November 2011 (UTC)

Hi Michael, I don't know the answer to your question, but the lead cites John S. Toll (1956). "Causality and the Dispersion Relation: Logical Foundations". Physical Review 104: 1760–1770. Bibcode 1956PhRv..104.1760T. doi:10.1103/PhysRev.104.1760. and in the body of the article it says "It can be shown (for instance, by invoking Titchmarsh's theorem) that this causality condition implies the Fourier transform is analytic in the upper half plane." and gives Jackson as the reference. And I'm not sure what the difference is between causal system versus causality (physics) nor which is more appropriate. In fact one might make a case that causal system and causality (physics) should be merged hth. Woz2 (talk) 18:45, 9 November 2011 (UTC)

"Causality" in this context means that the response must come after the stimulus (or at most coincident with it, in the limit of an instantaneous response), not before the stimulus. In particular, using the notation in the article, say P(t) is the response and F(t) is the stimulus ("force"). In a linear time-invariant system (as assumed by K–K), P must be a convolution of F and χ:

P(t) = \int_{-\infty}^{\infty} \chi(t - t')\, F(t')\, dt'

where χ is the "susceptibility" of the system (some kind of Green's function, essentially). Causality, the statement that P(t) cannot depend on F(t′) for t′ > t, means that χ(t) = 0 for t < 0. Then if you write down the Fourier transform of χ, this means you only integrate over t ≥ 0:

\chi(\omega) = \int_0^{\infty} \chi(t)\, e^{i\omega t}\, dt

It is then easy to show that χ(ω) must be an analytic function when Im ω > 0. For example, there cannot be any poles in χ(ω) for Im ω > 0 because in that case the Fourier integral is an integral of χ(t) multiplied by an exponentially decaying function e^{−Im(ω)t}, so it cannot blow up. (Unless χ is exponentially growing or has some other singularity, but in that case it is not a tempered distribution and its Fourier transform is not defined. Another way of putting it is that exponentially growing χ implies some kind of gain or exponential growth, which is not possible in a passive system, and in any case is unphysical without nonlinearity.) (Another viewpoint is that the Fourier transform of a causal susceptibility χ is a Laplace transform, whose analyticity properties are well known.)
Once you have analyticity, plus χ(ω) vanishing sufficiently fast as |ω| → ∞, then you can do the various contour integrals to get the K–K relations. All of this can be found in any standard textbook, e.g. you often find it in electrodynamics texts, e.g. Jackson or Landau & Lifshitz. (Usually also you assume some additional properties of the system, e.g. you typically assume χ(t) is real and hence χ(−ω) = χ(ω)* as mentioned in the article, and in a physical system the second law of thermodynamics usually implies nonzero dissipation and hence Im χ(ω) > 0 for real ω > 0. In these cases you can derive some additional facts or simplifications.)
— Steven G. Johnson (talk) 22:40, 9 November 2011 (UTC)
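(The argument above can be checked numerically. A minimal sketch — function names are mine, not from the article: take the causal susceptibility χ(t) = e^{−t}θ(t), whose transform under the e^{+iωt} convention is 1/(1 − iω), and recover Im χ from Re χ with a principal-value integral:

```python
import numpy as np

# Causal susceptibility chi(t) = exp(-t) for t >= 0, zero otherwise.
# Its Fourier transform (e^{+i w t} convention) is chi(w) = 1/(1 - i w),
# i.e. Re chi = 1/(1 + w^2) and Im chi = w/(1 + w^2).
def chi_re(w):
    return 1.0 / (1.0 + w**2)

def chi_im(w):
    return w / (1.0 + w**2)

def kk_im_from_re(w0, cutoff=2000.0, n=2_000_000):
    """Im chi(w0) = -(1/pi) P.V. integral of Re chi(w')/(w' - w0) dw'."""
    # Grid symmetric about the pole at w' = w0; an even n leaves no
    # sample on the pole itself, so the symmetric sum approximates
    # the Cauchy principal value.
    s = np.linspace(-cutoff, cutoff, n)
    h = s[1] - s[0]
    return -np.sum(chi_re(w0 + s) / s) * h / np.pi

print(kk_im_from_re(1.0), chi_im(1.0))  # both close to 0.5
```

The agreement degrades if the `cutoff` is too small — the same "vanishing sufficiently fast" condition mentioned above.)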
There should really be a link to this argument in the article. I looked at Causal_filter, Analytic_signal, Laplace_transform, Z-transform but none of them quite do it. I do think I saw it somewhere on Wikipedia though. 84.227.225.36 (talk) 10:42, 7 April 2014 (UTC)
There is actually an even deeper argument. Instead of assuming causality plus sufficiently fast decay, it turns out to be sufficient to assume only passivity, which means that the response P(t) does no net work (although it can have work done on it), for a suitable definition of energy. Surprisingly, because of some deep results in functional analysis, passivity implies causality, analyticity in the upper half plane, and Im χ(ω) ≥ 0 for Im ω > 0 (in fact, you fall into an important category called "Herglotz functions"). The classic reference on this stuff is Zemanian, Realizability Theory for Continuous Linear Systems (1972).
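(A concrete example showing these properties together — my illustration, not from Zemanian: the damped-oscillator susceptibility

\chi(\omega) = \frac{1}{\omega_0^2 - \omega^2 - i\gamma\omega}, \qquad \gamma > 0

has both of its poles at ω = −iγ/2 ± √(ω₀² − γ²/4), in the lower half plane, so it is analytic for Im ω > 0; and for real ω its imaginary part is γω/[(ω₀² − ω²)² + γ²ω²], which is ≥ 0 for ω ≥ 0, as dissipation requires.)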

Symbol for Fourier transform


χ(t) is used for the response function, χ(ω) for its Fourier transform. To me this seems to call for two different function names or a hat, or is it customary to specify a function by its independent variable alone? But I don't know what choice to make for a suitable letter that fits with conventional choices. 84.227.225.36 (talk) 10:42, 7 April 2014 (UTC)

I've seen proofs that use lower case for time domain and upper case for the equivalent frequency domain. Woz2 (talk) 23:36, 16 April 2014 (UTC)
The notation described is very common. Mathematicians wouldn't use it, but in many settings it is unambiguous enough to work. 178.39.122.125 (talk) 15:33, 8 February 2017 (UTC)

What class of systems?


From the article:

Fortunately, in most systems, the positive frequency-response determines the negative-frequency response because χ(ω) is the Fourier transform of a real quantity χ(t),...

What would be typical examples, situations, or a characterization of when realness is violated? I thought χ(t) and the signals it operates on (gets convolved with) were ALWAYS real. 178.39.122.125 (talk) 15:31, 8 February 2017 (UTC)
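(A quick numerical illustration of the sentence quoted above — a sketch of mine, not from the article: for any real sampled χ(t), the discrete Fourier transform obeys the Hermitian symmetry χ(−ω) = χ(ω)*, so the positive-frequency response determines the negative-frequency response:

```python
import numpy as np

rng = np.random.default_rng(0)
chi_t = rng.standard_normal(64)   # any real sampled response chi(t)
chi_w = np.fft.fft(chi_t)         # discrete analogue of chi(omega)

# Hermitian symmetry: frequency bin -k is bin n-k in FFT indexing,
# so reality of chi(t) means chi_w[k] == conj(chi_w[n-k]).
sym_err = np.max(np.abs(chi_w[1:] - np.conj(chi_w[:0:-1])))
print(sym_err)  # at the level of floating-point roundoff
```

One common setting where realness is dropped is the complex-envelope (baseband-equivalent) representation used in signal processing, where χ(t) is intrinsically complex and the symmetry no longer holds.)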

Deleting "Spatial Kramers-Kronig" section


User:Docudrama3 added a section based on this paper by Horsley et al.: https://doi.org/10.1038%2Fnphoton.2015.106 . I am deleting it. I don't really have anything bad to say about the text, nor about the Horsley paper, which I think is a lovely paper. My complaint is that discussing it is undue emphasis. There are literally tens of thousands of papers in the literature that use or elaborate on Kramers-Kronig; there is an entire book written about Kramers-Kronig. Describing this one particular paper, one out of those tens of thousands, feels like very disproportionate emphasis, and sorta misleads readers who are trying to get a balanced introduction to Kramers-Kronig in general. I feel like this article could be 10 times longer than it is now, while only covering topics much more important to understanding Kramers-Kronig than the Horsley paper is, e.g. aspects of Kramers-Kronig that have been widely used in theory and practice for decades. --Steve (talk) 01:03, 22 June 2018 (UTC)