Talk:Gaussian function
This level-5 vital article is rated C-class on Wikipedia's content assessment scale. It is of interest to several WikiProjects.
Text and/or other creative content from this version of Integral of a Gaussian function was copied or moved into Gaussian function with this edit on 10 March 2014. The former page's history now serves to provide attribution for that content in the latter page, and it must not be deleted as long as the latter page exists.
Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 25 February 2020 and 8 May 2020. Further details are available on the course page. Student editor(s): NathanielJL.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 22:05, 16 January 2022 (UTC)
d parameter
a, b and c are described in the intro paragraph, but d is not. Isn't d just the offset from 0?
The d parameter should not be part of this expression. Leaving it in there would mean that two thirds of the integrals below it are wrong. It's also not common, at least in physics, to have it there. [1] [2] 81.147.122.82 (talk) 09:00, 1 December 2014 (UTC)
In that case, shouldn't the mention of d elsewhere on the page be removed? It is currently used without a definition, which is not only poor style but also confusing. — Preceding unsigned comment added by 98.210.169.92 (talk) 05:12, 7 January 2015 (UTC)
Do we mean to say that
- gaussian functions are eigenfunctions of the Fourier transform,
or that
- eigenfunctions of the Fourier transform are gaussian functions
or neither? -- Miguel
Not all eigenfunctions of the Fourier transform are Gaussian. See Hermite polynomials. Michael Hardy 15:10, 30 Aug 2003 (UTC)
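For concreteness, the standard statement is that the Hermite functions form a full set of eigenfunctions of the Fourier transform, with the Gaussian as the n = 0 case (unitary, angular-frequency convention assumed below):

\psi_n(x) = H_n(x)\, e^{-x^2/2},
\qquad
\hat{\psi}_n(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \psi_n(x)\, e^{-i\omega x}\, dx = (-i)^n\, \psi_n(\omega).

So the first reading above is the correct one: Gaussians are eigenfunctions, but not the only ones.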
maximum entropy
Could someone add something about Gaussian functions being the ones with maximum entropy? I think this can also be related to the Heisenberg uncertainty principle since momentum and position are canonical conjugate variables.
- This article links to normal distribution, which I suspect already gives that information. For non-normalized Gaussian functions, I'm not sure at this moment what the maximum-entropy statement would say. Michael Hardy 23:46, 27 Feb 2005 (UTC)
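For the normalised case the standard statement is:

h(p) = -\int_{-\infty}^{\infty} p(x)\, \ln p(x)\, dx \;\le\; \tfrac{1}{2}\, \ln\!\left(2\pi e\, \sigma^2\right),

with equality exactly for the Gaussian density, over all probability densities p with variance σ². For a non-normalised Gaussian function one would presumably normalise first before the statement applies.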
image
are they all the same bell-shape? if so, let's get a picture! - Omegatron 17:51, Mar 15, 2005 (UTC)
Definition of a, b, c
The function definition uses a, b, c as parameters, while the graph uses μ and σ² as parameters. What is the relation between the two sets of parameters?
--NeilenMarais 20:18, 24 May 2006 (UTC)
- Answering myself, it seems from looking at Gaussian function that a = 1/(σ√(2π)), b = μ, and c = σ. One could mention this, or perhaps even better, generate an image using the correct parameters. Opinions? --NeilenMarais 20:27, 24 May 2006 (UTC)
Not entirely correct. Not all Gaussian functions are probability density functions, so a need not be a normalizing constant that makes the integral equal to 1.
But certainly I think the caption should explain the notation used in the illustration. Michael Hardy 21:29, 24 May 2006 (UTC)
Yes, I'd also like a better explanation of this. And also, for the 2D case, σ_x and σ_y are said to be the "spread" of the function, which term is not explained. Is it related to the FWHM?
Gaussian Function ..
Would I be correct if I said that a Gaussian function as such represents the values a variable can have? That is to say, it gives us a range of possible values of the variable, or it shows the region where the value of that variable lies.
Is that what the Gaussian function does ... —The preceding unsigned comment was added by Hari krishnan07 (talk • contribs) 04:36, 3 December 2006 (UTC).
- Not directly. The Gaussian function is the name for a function with specific properties, e.g. as illustrated in the curves in the article. What you refer to is a probability distribution, which can have the form of a Gaussian. Kghose 16:01, 16 December 2006 (UTC)
A function of the form x^{2m} e^{-x^2} — is that some kind of Gaussian function? --Karl-H 11:12, 27 January 2007 (UTC)
- I think not. This is related to Gaussian functions of course, but a true Gaussian function should not have the x^{2m} term in front. Oleg Alexandrov (talk) 18:41, 27 January 2007 (UTC)
Ambiguity or error in definition of sigma?
There seems to be an ambiguity or error in the definition of sigma here. If I am not misinformed, sigma-x and sigma-y are the standard deviations of the function along the x and y axes respectively? If this is correct, then a 2-d Gaussian ellipse inclined at theta = 45 degrees would have the same sigma-x and sigma-y as a circular 2-d Gaussian, but with the covariance = 0 for the circular Gaussian and non-zero for the elliptical Gaussian.
In the 3 plots showing rotation of the ellipse from theta = 0 to theta = pi/3, the values for sigma-x and sigma-y are the same, 1 and 2 respectively. This implies that here sigma-x and sigma-y are the standard deviations along the minor and major axis of the ellipse, not along the x and y axis of the function.
Could someone please clear this up? Also, an equation that relates the angle theta to the covariance term would be helpful.
""""jgreen —Preceding unsigned comment added by 75.75.90.207 (talk) 21:22, 20 September 2007 (UTC)
Is there a spurious factor of two in front of 'b' for the 2D Gaussian? The Matlab code contains no 2, whilst the LaTeX image of the equation does. —Preceding unsigned comment added by 220.239.69.107 (talk) 05:52, 16 October 2007 (UTC)
I'm inclined to agree with the last statement re:factor of two. See mathworld... —Preceding unsigned comment added by 74.74.223.195 (talk) 10:22, 21 June 2008 (UTC)
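One way to see where the explicit 2 comes from: if the exponent is written as a symmetric quadratic form, the off-diagonal entry b contributes twice,

\begin{pmatrix} x & y \end{pmatrix}
\begin{pmatrix} a & b \\ b & c \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
= a x^2 + 2 b x y + c y^2 .

With the 2 written out, (a, b, c) are exactly the entries of that matrix; code that omits the 2 describes the same family of functions with b rescaled by 2, which would explain the apparent mismatch between the Matlab listing and the displayed equation.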
I seem to remember that all derivatives of a Gaussian are again Gaussian. But there may as well be an additional condition on the polynomial. Could someone shed some light on this? —Preceding unsigned comment added by 84.227.21.231 (talk) 16:53, 18 January 2009 (UTC)
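For the record, they are not Gaussian again: the nth derivative is a polynomial (a Hermite polynomial) times the original Gaussian, e.g.

\frac{d}{dx} e^{-x^2} = -2x\, e^{-x^2},
\qquad
\frac{d^n}{dx^n} e^{-x^2} = (-1)^n H_n(x)\, e^{-x^2},

where H_n is the physicists' Hermite polynomial of degree n; the same holds, with rescaled polynomials, for a Gaussian of arbitrary centre and width.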
I changed the wording of the definition of a Gaussian derivative; I suggest a math expert review it to ensure the new description is accurate. So far this is the best resource on the web that I can find, particularly for explaining Gaussian derivatives. Jon.N. —Preceding undated comment added 21:56, 8 August 2009 (UTC).
Algorithm? A misspelling?
- Gaussian functions arise by applying the exponential function to a general quadratic function. The Gaussian functions are thus those functions whose algorithm is a quadratic function.
Did the author mean "logarithm" instead of "algorithm" here?
FFT?
I'm confused... I see that the FFT of a Gaussian is Gaussian, but in a discrete implementation using scipy's fft
to transform Gaussian functions, I get that σ -> N*5/(32σ) where N is the number of bins. This 5/32 seems like a weird magic number. What am I missing? —Ben FrantzDale (talk) 16:28, 2 April 2010 (UTC)
I agree, and can add that the magic number is 1/(2π) ≈ 5/32. So that (at least in Matlab), doing an fft on a Gaussian with sigma c results in a Gaussian with sigma N/(2πc). --12.yakir (talk) 18:22, 27 August 2012 (UTC)
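The measured 5/32 ≈ 0.156 is presumably a rough estimate of 1/(2π) ≈ 0.159. A minimal NumPy check of that width relation (NumPy's FFT used here instead of scipy's; the spectral width is estimated from the second moment of the magnitude spectrum):

import numpy as np

N = 1024
sigma = 20.0                                         # width of the sampled Gaussian, in samples
x = np.arange(N)
g = np.exp(-(x - N / 2) ** 2 / (2 * sigma ** 2))     # Gaussian centred in the window

G = np.fft.fftshift(np.abs(np.fft.fft(g)))           # magnitude spectrum (also bell-shaped)
bins = np.fft.fftshift(np.fft.fftfreq(N) * N)        # frequency axis in bin units, centred at 0

sigma_fft = np.sqrt(np.sum(bins ** 2 * G) / np.sum(G))   # second-moment width estimate
print(sigma_fft, N / (2 * np.pi * sigma))                 # the two values should roughly agree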
No mention of parabola
So, no mention of the fact that a linear-logarithmic Gaussian is a linear-linear parabola. I would imagine this would be considered a feature of note, but I don't know whether it is or not, or how to say that in a way that gives some substance to the fact. ᛭ LokiClock (talk) 06:21, 8 July 2010 (UTC)
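Concretely, on a plot of ln f against x the Gaussian is the downward parabola

\ln f(x) = \ln a - \frac{(x - b)^2}{2 c^2},

so the claim is simply that the logarithm of a Gaussian is a concave quadratic.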
Matlab code better in Python?
Because not everybody has access to Matlab due to its costs, I would like to see Python/Numpy code here instead. It is very similar, but nevertheless differs in some details. I could do the conversion myself, if people agree. --maye (talk) 10:12, 27 October 2010 (UTC)
- I agree. There has been an attempt in the past to write this into policy, but the discussion was sidetracked and then ceased. The policies on media content do not merely disallow content they can't legally republish, but also disallow content under non-commercial licenses, because anyone should be able to use all of the encyclopedia freely. At least in this case, you would be acting towards that end, and I see no reason why MATLAB has particular significance in this article. ᛭ LokiClock (talk) 14:10, 27 October 2010 (UTC)
- Most Matlab code can be run using Octave, which is free and open-source. Alhead (talk) 21:00, 7 December 2011 (UTC)
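For what it's worth, a minimal NumPy/Matplotlib sketch of what the replacement could look like (the function and parameter names below are my own choice, not taken from the article's Matlab listing):

import numpy as np
import matplotlib.pyplot as plt

def gaussian_2d(x, y, A=1.0, x0=0.0, y0=0.0, sigma_x=1.0, sigma_y=2.0, theta=0.0):
    """Rotated elliptical 2D Gaussian in the a, b, c parametrisation discussed above."""
    a = np.cos(theta) ** 2 / (2 * sigma_x ** 2) + np.sin(theta) ** 2 / (2 * sigma_y ** 2)
    b = -np.sin(2 * theta) / (4 * sigma_x ** 2) + np.sin(2 * theta) / (4 * sigma_y ** 2)
    c = np.sin(theta) ** 2 / (2 * sigma_x ** 2) + np.cos(theta) ** 2 / (2 * sigma_y ** 2)
    return A * np.exp(-(a * (x - x0) ** 2 + 2 * b * (x - x0) * (y - y0) + c * (y - y0) ** 2))

x, y = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
z = gaussian_2d(x, y, theta=np.pi / 6)   # tilted elliptical blob

plt.contourf(x, y, z)
plt.gca().set_aspect("equal")
plt.show()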
Exponent's quadratic's form
The article states:
- "Gaussian functions arise by applying the exponential function to a general quadratic function."
But how does applying the one to the other lead to that form for the quadratic? This should probably go in a History section. ᛭ LokiClock (talk) 00:47, 26 March 2011 (UTC)
Also the quadratic function is not "general". It must be of the specific form (x-b)^2 and not general ax^2+bx+c. — Preceding unsigned comment added by 210.136.188.81 (talk) 16:34, 20 June 2014 (UTC)
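To spell out the connection asked about above, completing the square shows how exponentiating a quadratic with negative leading coefficient (my notation, with α < 0) reduces to the standard Gaussian form:

e^{\alpha x^2 + \beta x + \gamma}
= e^{\gamma - \frac{\beta^2}{4\alpha}}\, e^{\alpha \left(x + \frac{\beta}{2\alpha}\right)^2}
= a\, e^{-\frac{(x - b)^2}{2 c^2}},
\qquad
a = e^{\gamma - \frac{\beta^2}{4\alpha}},\quad
b = -\frac{\beta}{2\alpha},\quad
c = \frac{1}{\sqrt{-2\alpha}}.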
Merging
I guess they mean exactly the same thing, and normal distribution is the more formal name from the mathematical point of view. (Unsigned post.)
- Was suggestion to merge with Normal distribution.
- The Gaussian function also has numerous applications outside the field of statistics, for example in solutions of diffusion equations, in Hermite functions, and in feature detection in computer vision. If this article were merged into normal distribution, these connections would be lost. Hence, I think it is more appropriate to keep this article, with appropriate cross-referencing. Tpl (talk) 11:53, 8 June 2011 (UTC)
- Merge templates go on the articles not on the discussion page. New stuff on discussion page goes at the end. Have removed merge template as merge would not be good. Melcombe (talk) 14:44, 8 June 2011 (UTC)
Typo?
Can someone verify that the equation for the 2-dimensional elliptical Gaussian is correct as currently stated, with a positive cross term in the exponent: f(x,y) = A exp(−(a(x−x₀)² + 2b(x−x₀)(y−y₀) + c(y−y₀)²))
Or should the second term in the exponent actually be negative, like so: f(x,y) = A exp(−(a(x−x₀)² − 2b(x−x₀)(y−y₀) + c(y−y₀)²))
I'm drawing from this page (wikipedia.org) and other sources for the bivariate gaussian pdf. — Preceding unsigned comment added by 129.123.61.172 (talk) 21:04, 28 September 2011 (UTC)
- Well, it seems the only difference is the change of sign of the parameter b. Then it is simply a matter of definition. Bakken (talk) 10:52, 1 October 2011 (UTC)
Multivariate gaussian
An undefined variable called B keeps showing up inside the "Multivariate gaussian" paragraph. I have no idea if it is simply: B = A, or if there is something subtle going on here, but either we need to add a definition for B, or just remove it from the math. — Preceding unsigned comment added by 2001:620:600:6000:ECDF:3A24:4CBA:7595 (talk) 11:47, 15 April 2013 (UTC)
- I believe you are right, I just let B = A in one place. But later, there is a B that is defined as B = A + A' and I didn't yet figure out what it means. Bdmy (talk) 12:23, 15 April 2013 (UTC)
- What's worse, I believe the definition of the Gaussian including the shift vector is wrong, or at least not explained enough. The definition used in the multivariate normal is something along the lines of f(x) = n·exp(−(x − s)ᵀ A (x − s)), where n is the normalisation factor, A is the inverse of the covariance matrix and s is a shift (the mean for multivariate normal distributions). In the article it says f(x) = exp(−xᵀ A x + sᵀ x). According to my calculations, experiments and intuition those are not equivalent. For instance, with larger s, the value of the peak becomes larger, while this does not happen with the first definition. Also, if it's only shifted, the integral should not change. So imo, whatever it is, it is not a shifted gaussian. --128.130.118.54 (talk) 10:41, 24 October 2019 (UTC)
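If the form being questioned is indeed exp(−xᵀAx + sᵀx) (my reading of the comment above, not a quote from the article), completing the square gives

e^{-x^\top A x + s^\top x}
= e^{\frac{1}{4} s^\top A^{-1} s}\;
  e^{-\left(x - \frac{1}{2} A^{-1} s\right)^{\!\top} A \left(x - \frac{1}{2} A^{-1} s\right)},

so it is a shifted Gaussian multiplied by an s-dependent amplitude factor, which matches the observation that the peak value (and the integral) grows with s.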
Proposed merge with Radial basis function kernel
Mathematically the same concept, except that the other page presents a two-argument version (i.e. where b is not a constant) and describes a single use case in more detail. QVVERTYVS (hm?) 19:51, 18 April 2014 (UTC)
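For context, the two-argument version on that page is the kernel (often parametrised with γ = 1/(2σ²)):

K(x, x') = \exp\!\left(-\frac{\lVert x - x' \rVert^2}{2\sigma^2}\right),

i.e. a Gaussian in the distance between the two arguments, with the fixed centre b replaced by the second argument.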
Arguments Against
RBF, while related to gaussians, is most identifiable as a type of kernel. It makes more sense to keep this separate since gaussians are a huge topic and someone is more likely to find this page while reading about different kernel types than applications of gaussian functions. — Preceding unsigned comment added by 2602:306:371B:9920:4D9:8168:7CF6:D618 (talk) 00:30, 20 May 2014 (UTC)
Why the 2 in the denominator?
In the definition of a Gaussian as given in the lead section of the article, f(x) = a·e^{−(x − b)²/(2c²)},
there is a 2 in the denominator of the argument to the exp function. Why is that 2 there? This is not explained in the article. The function would still be a Gaussian if the 2 were not there, and it would have a simpler expression. I thought the 2 had something to do with the integral from −∞ to ∞ having a simpler expression, since a normal distribution function should always have area 1, indicating that the integral of a Gaussian may be an interesting property. However, looking further down in the article, it turns out that the integral is ∫ a·e^{−(x−b)²/(2c²)} dx = a·|c|·√(2π),
so the 2 didn't simplify the expression of this integral either, it just complicated it, since without the 2 in the denominator on the left-hand side, there is no 2 in the square root on the right-hand side either. So what does the 2 do there? It doesn't really feel like it belongs in the expression. —Kri (talk) 21:36, 12 October 2014 (UTC)
- No one knows?? WTF! I'm starting to think that it is someone who has just pulled a prank by putting that 2 there... —Kri (talk) 21:17, 22 October 2014 (UTC)
- Now I know why the 2 is there. It comes from the expression of the normal distribution, in which c, or σ which it is called there, is the standard deviation, so without the 2, c wouldn't equal to the standard deviation (although I saw now that the article already mentions that c is the standard deviation). So, mystery solved. —Kri (talk) 12:29, 30 October 2014 (UTC)
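To make the resolution explicit: the 2 is what makes c the standard deviation, since, treating the (normalised) Gaussian as a weight, its second central moment is exactly c²,

\frac{\displaystyle \int_{-\infty}^{\infty} (x - b)^2\, e^{-\frac{(x - b)^2}{2 c^2}}\, dx}
     {\displaystyle \int_{-\infty}^{\infty} e^{-\frac{(x - b)^2}{2 c^2}}\, dx} = c^2,

whereas without the 2 the same ratio comes out as c²/2.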
Does the opening formula actually work properly?
At the peak, when x=b, everything in the large parenthesis should be 0 ( x-b=0, so (x-b)^2=0, so -((x-b)^2)/(2c^2)=0 ).
This should result in: a EXP 0 = 1.0, regardless of the value of a. But the result should be a, the peak value, by definition of a.
So what part of this am I not understanding? The meaning of EXP? Thanks. — Preceding unsigned comment added by 67.249.?.? (talk) 22:24, 4 February 2015 (UTC)
Ok, I'm reading that EXP usually (but not always) means the constant e (2.718...) raised to a given power. It would probably be a good idea to make this explicit somewhere. Anyway, at the peak this value should still be 1.0 for the reasons given above, but I have to admit my code is working beautifully now!
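A quick numerical check of the point being settled here (the numbers below are arbitrary): exp(0) = 1, so the value at the peak is a·1 = a, not 1.

import math

a, b, c = 3.0, 1.5, 0.7                                # arbitrary peak height, centre and width
f = lambda x: a * math.exp(-(x - b) ** 2 / (2 * c ** 2))
print(f(b))                                            # prints 3.0: the peak value is a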
Be more analytical about neural networks
editSource for Meaning of parameters for the general equation
editCan anyone provide the source of the equations listed under Meaning of parameters for the general equation, i.e. coefficients a, b and c? I have calculated them on my own and a couple of signs turned out different. I wonder if there's a textbook that contains the full derivation or the outcome.
Should mention be made that "gaussian" is often not capitalized?
Since the lower-case "gaussian" is pretty common now, I think adding this to the opening sentence is wise.
History
editWhy is this called gaussian, since it already appeared in the works of de Moivre ?
I think it would be useful to add a short history section saying that this was not invented by Gauss. 2001:861:3008:37D0:34F7:B9ED:A9EB:BEF3 (talk) 20:30, 24 April 2023 (UTC)