Talk:Finite difference
This is the talk page for discussing improvements to the Finite difference article. This is not a forum for general discussion of the article's subject.
This article is rated B-class on Wikipedia's content assessment scale. It is of interest to several WikiProjects.
Better description needed
The description of finite difference methods seems somewhat muddled - there appears to be a confusion between particular finite difference methods and the general definition of the term. It seems to me that at the very least the article ought to partition itself a little better - for instance, if difference operators are mentioned, it would be nice to at least indicate that there is more than one and that all are derived from Taylor expansions. I don't necessarily feel myself qualified to make any modifications that might be necessary - if anyone is, then perhaps they could edit this topic?
If I could expand on this: "Finite differences" is about replacing derivatives by differences; it can be applied in one dimension or several, and to any order of derivative. Perhaps a few examples rather than one would be more informative - for instance the three sketched below. A "finite difference equation" arises when we substitute finite differences for the derivatives in a differential equation. Closely allied to this is the idea of a method of solution, obtained by putting unknowns in terms of known quantities. In ordinary differential equations this gives a variety of solution methods. In PDEs the methods are called finite difference methods.
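For instance (a sketch of what such examples might look like, with step h as elsewhere on this page), the three basic first differences are the forward, backward and central ones:
<math display="block">\Delta_h f(x) = f(x+h)-f(x), \qquad \nabla_h f(x) = f(x)-f(x-h), \qquad \delta_h f(x) = f\!\left(x+\tfrac{h}{2}\right)-f\!\left(x-\tfrac{h}{2}\right),</math>
each of which, divided by h, approximates f'(x) with an error read off from the corresponding Taylor expansion.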
Crank-Nicolson
(Note: Nicholson is a common misspelling of Nicolson. Refer to the original paper (1947) or The Mathematics of Diffusion by John Crank.)
Regarding the Crank-Nicolson scheme, I am pretty sure that the convergence order is O(k^2 + h^2) and not O(k^2 + h^4).
The advantage over the implicit method is basically the improvement of the convergence speed with respect to the time step, which is O(k^2) instead of the O(k) of the implicit method.
I decided to post it here so we may discuss it instead of overwriting it, just to be sure.
I am copying the discussion from User Talk pages:
- Hello. You wrote in finite difference that Crank-Nicolson is fourth-order in space. Could you please make sure that this is true, preferably by giving a reference? I'm pretty certain that Crank-Nicolson is second-order, but it might just be possible that it is fourth-order for this specific setting, and I won't be in my office for some time so it will be hard for me to check it. -- Jitse Niesen (talk) 22:40, 20 December 2005 (UTC)
- I will look for a paper source.
- But the order can be easily derived.
- Let's take the first six members of the Taylor series of f at time t=Nk+1/2
- we can find the second derivative over x, then find the first derivative over time from the heat transfer equation, then find the second space derivative of df/dt and so find the second derivative over time, etc.
- Thus the Taylor expansion over x and t will look like:
- where D is the thermal diffusivity.
- If we substitute the obtained expansion at x = -h, 0, +h and t = -k/2, +k/2 into the Crank-Nicolson scheme, we will see that all the terms written cancel and the Crank-Nicolson scheme is exact for them. Thus the errors come from the terms in x^6 and t^3, and so they are proportional to h^4 and k^2 (we take the 2nd order derivative over x and the first over t).
- Obviously, to get the fourth order over h we should take care over the boundary conditions. E.g., maintaining the adiabatic b.c. with a low-order one-sided difference would destroy the 4th order instantly, but the scheme itself is 4th order (if the coefficients are constant).
- BTW, for the explicit scheme in the special case of r=1/6, we are getting a quadratic error over t and fourth order over x. I am not sure if we should mention it. abakharev 00:08, 21 December 2005 (UTC)
- The easiest way of checking this is to plug the Crank-Nicolson scheme into Mathematica, subtract from it the exact derivatives (the heat equation, that is), and expand it all in Taylor series (Mathematica has a command for that). One can do it in Maple too, I think. Symbolic calculations saved my ass a lot of times when dealing with messy finite-difference schemes. Oleg Alexandrov (talk) 16:25, 21 December 2005 (UTC)
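For anyone wanting to reproduce that check without Mathematica, here is a minimal sketch in Python/SymPy (the choice of exact solution u = e^(-Dt)·sin(x) and all names are ours): substitute an exact solution of u_t = D·u_xx into the scheme and Taylor-expand the residual; the lowest surviving powers of h and k give the local order.
<syntaxhighlight lang="python">
# A sketch of the symbolic check suggested above, in SymPy rather than
# Mathematica. We substitute an exact solution of u_t = D*u_xx into the
# Crank-Nicolson scheme and expand the residual in h and k.
import sympy as sp

x, t, h, k, D = sp.symbols('x t h k D', positive=True)
u = sp.exp(-D * t) * sp.sin(x)          # exact: u_t = D*u_xx

def at(dx, dt):
    """The exact solution evaluated at (x + dx, t + dt)."""
    return u.subs({x: x + dx, t: t + dt}, simultaneous=True)

# Crank-Nicolson residual: time difference minus averaged space differences.
residual = (at(0, k) - at(0, 0)) / k - (D / 2) * (
    (at(h, k) - 2 * at(0, k) + at(-h, k)) / h**2
    + (at(h, 0) - 2 * at(0, 0) + at(-h, 0)) / h**2
)

# Expand in h, then in k; the lowest surviving powers reveal the order.
expanded = residual.series(h, 0, 5).removeO().series(k, 0, 3).removeO()
print(sp.expand(expanded))
</syntaxhighlight>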
Implicit Method (backward difference)
The recurrence equation in the description of the backward difference is incorrect. It looks like the recurrence relation of the forward difference has been restated again.
The backward difference recurrence equation should be Y(x) - Y(x-1)...
- I'm afraid I have no idea what you mean. The right-hand side of the recurrence relation for backward differences is at time n+1, and the rhs for forward difference is at time n. There is no Y defined in the article. -- Jitse Niesen (talk) 05:39, 10 April 2006 (UTC)
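For later readers, the two recurrences being contrasted (written out for the heat equation u_t = u_xx in a common notation; the symbols here are ours, not necessarily the article's) are
<math display="block">\frac{u_j^{n+1}-u_j^{n}}{k} = \frac{u_{j+1}^{n}-2u_j^{n}+u_{j-1}^{n}}{h^2} \quad \text{(forward/explicit)}, \qquad \frac{u_j^{n+1}-u_j^{n}}{k} = \frac{u_{j+1}^{n+1}-2u_j^{n+1}+u_{j-1}^{n+1}}{h^2} \quad \text{(backward/implicit)},</math>
which differ precisely in whether the space difference on the right-hand side is taken at time level n or n+1.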
Reorganization
I'm proposing a reorganization of this article and finite difference schemes on Talk:Finite difference schemes. Please comment. -- Jitse Niesen (talk) 11:52, 31 July 2006 (UTC)
Mathematical analysis vs. numerical one
I think there are two different notions of finite difference in math: one is in numerical analysis and another one is in mathematical analysis.
- in the first case it is a "mathematical expression of the form f(x + b) − f(x + a)". In the second it is an operator, which is not a "mathematical expression".
- in the first case it is denoted as Δ_h[f](x), while in the second Δ_h f. I have never seen the first notation applied to an operator.
- the mathematical books can be divided into two groups: books in the first group mention only forward and backward differences and say nothing about operators, while books in the second group work with operators only.
- writing powers like Δ_h^n = Δ_h Δ_h^{n−1} is usual for operators but almost nonsense for numerical analysis.
It's like operators in functional analysis and algebra — much in common but different notions. Mir76 17:46, 16 March 2007 (UTC)
- I'm not sure that I understand what you say, but I disagree with there being separate meanings in numerical and mathematical analysis:
- 1. In my experience, in both fields "finite difference" usually refers to a "mathematical expression of the form f(x + b) − f(x + a)", with the operator being called a "finite difference operator". Can you name some books that use "finite difference" to refer to the operator?
- Yes, it is quite common experience for a specialist in numerical analysis :) Ok, here goes the first book that fell into my hands: Paul L. Butzer, Hubert Berens, "Semi-Groups of Operators and Approximation", Springer-Verlag, Berlin Heidelberg New York, 1967. Page 257:
Let Δ_h f = (T_h − I)f be the right difference of a function f...
- I think I have to explain something. Here T_h is the shift operator (f(x) –> f(x+h)) and I is the identity operator (f(x) –> f(x)). It is the only definition of differences in this book, and there are no central differences in the whole book at all.
- Next example: Charles K. Chui "An introduction to wavelets", formula (4.1.9), translated back from Russian:
We will use backward differences defined as (Δf)(x) = f(x) − f(x−1); Δ^n f = Δ(Δ^{n−1} f).
- I'm almost sure I saw something similar in Ingrid Daubechies' "Ten Lectures on Wavelets", but I cannot find it right now. I can say that Berens and Chui are both very high-class mathematicians.
- Moreover every book dealing with Besov spaces will use this notation. Mir76 17:55, 18 March 2007 (UTC)
- It seems to me that Chui's quote agrees exactly with what I said: The expression Δf(x) is called a "backward difference". The "backward difference operator" is Δ, and the result when you apply this operator is the "backward difference". Similarly, Butzer & Berens say that Δ_h f (the result of applying the operator to a function f) is a "right difference".
- Now, it happens quite often that functions and their results are confused in mathematics, so people talk for instance about "the function f(x)" even though it's more precise to say "the function f". So I would argue that possible confusion between the operator and the result of applying the operator does not show that the notion of "finite difference" in mathematical analysis differs from that in numerical analysis. -- Jitse Niesen (talk) 10:56, 19 March 2007 (UTC)
- 2. The notation Δ_h[f](x) is common in numerical analysis and other branches of applied mathematics to refer to the operator Δ_h, applied to the function f, and the resulting function evaluated at x; see for instance calculus of variations. The idea is to distinguish operators (which are applied to functions) and functions (which are applied to numbers) by using square brackets [] for the former and round parentheses () for the latter. However, this notation is by far not the only one used in numerical analysis; I've also seen several other variants. The subscript h is also used frequently in numerical analysis.
- I think you are right here, but when I added this notation to the article, Oleg Alexandrov reverted it along with some edits of an unknown person. That's why I arranged a separate section for analysis. Mir76 17:55, 18 March 2007 (UTC)
- 3. It's true that some books do not look at the operator aspect. However, there are plenty of numerical analysis books that work with operators. For instance, the equation Δ_h = T_h − I uses operators, yet I got this equation from a numerical analysis book.
- Does that book work with the shift operator above? Do all students of numerical analysis understand the operator notation? In my experience the answer is no. Mir76 17:55, 18 March 2007 (UTC)
- Yes, it uses the shift operator. Regarding your second question, I don't know quite what you mean. I expect that all students will be able to understand the operator notation if it is explained to them. It's not a big deal after all.
- You seem to underestimate the amount of mathematical analysis used in numerical analysis. Semi-groups of operators are used in numerical PDEs, approximation theory is a standard part of courses in numerical analysis, and wavelets are arguably as much numerical analysis as mathematical analysis. -- Jitse Niesen (talk) 10:56, 19 March 2007 (UTC)
- 4. Where do you get the formula Δ_h^n = Δ_h(Δ_h^{n−1}) from? That seems sloppy notation: the domain of Δ is a space of functions, so you can't apply Δ to itself.
- Surely I can. It is operator composition :). See, for example, Chui above. It is really a very common definition. That's why I need a separate section — to protect these definitions from numerical analysis students. Mir76 17:55, 18 March 2007 (UTC)
- Well, normally composition is denoted like Δ_h ∘ Δ_h^{n−1}. In the quote of Chui, he uses Δ^n f = Δ(Δ^{n−1} f), which is not the same as Δ^n = Δ(Δ^{n−1}).
- You are using expressions like Δ_h^n above without any problem, but the domain of Δ_h is a space of functions, not numbers.
- Your last sentence is very much against the ethos here: you don't own a section but you have to allow others to edit it. -- Jitse Niesen (talk) 10:56, 19 March 2007 (UTC)
- I am quite aware of this. I am happy when my contributions are improved and I am not happy when they are reverted.
- 5. "It's like operators in functional analysis and algebra" — What do you mean with an operator in algebra?
- Ok, it was a mistranslation from Russian. I'll try to explain it in other words: Consider two operators: the first one is multiplication of a vector by a square matrix; the second one is taking the value of a function at one fixed point (f(x) —> f(1)). They are both operators, but one of them has eigenvalues while the other has not. Mir76 16:37, 19 March 2007 (UTC)
- 6. So, I still think that there are two different notions here. -- Jitse Niesen (talk) 06:45, 18 March 2007 (UTC)
- I wanted to say: I still don't think that there are two different notions here. Sorry about the confusion. -- Jitse Niesen (talk) 02:46, 19 March 2007 (UTC)
- Mir76, there were several issues with your edit. Firstly, you changed the notation for central differences from δ to Δ even though this clashes with the notation for forward differences, but you didn't make this change throughout the whole article. Then you introduced your favorite notation Δ_h f, even though that doesn't seem to add to the article. Finally, you used your notation, instead of the notation used in the rest of the article, to define higher-order differences, writing Δ_h^n = Δ_h(Δ_h^{n−1})
- where you should have written Δ_h^n f = Δ_h(Δ_h^{n−1} f).
- So there were plenty of reasons to revert and none of them has to do with any difference between mathematical and numerical analysis. -- Jitse Niesen (talk) 10:56, 19 March 2007 (UTC)
- I never did this. I do not know who 80.7.146.126 is; he is surely not me. What I did was add the line "Sometimes this difference is denoted Δ_h f", which is obviously right. After that I used this particular notation to define higher-order differences, since there was no proper definition. And yes, I made something which looks to you like a two-symbol typo — I omitted the obvious argument. After that, instead of correcting a typo, Alexandrov erased my notation and my definition of higher-order differences. So I decided that he didn't recognize them (sorry for that assumption) and started another section with the operator's point of view. 16:37, 19 March 2007 (UTC)
If there's a difference between the 'mathematical' and 'numerical analytic' uses of the term finite difference, I can't see it. It doesn't matter whether one views it in terms of the f.d. operator or not. JJL 13:01, 19 March 2007 (UTC)
- For me the main difference is that one thing is an expression while the other is an operator. Mir76 16:37, 19 March 2007 (UTC)
Summary
What I had done:
1. Added the notation Δ_h f.
2. Added Δ_h^n = Δ_h(Δ_h^{n−1}) (I do not need an argument f(x) here, it is like f(r(t)) = f(r)).
3. Added a definition of the difference as an operator, not an expression (I strongly believe that an operator is not an expression; e.g., the Fourier transform is not an expression).
4. Mentioned some pages where this operator is used.
I need all these four items (and they are not my fantasies, as you can see). I do not know how to incorporate these issues (especially #3) into the existing article without breaking it. It is too biased towards numerical analysis now. So I started another section which is mathematical analysis oriented. I will be really happy if somebody will merge (not simply revert) these two sections into one. Or we can separate them into different articles. Mir76 16:37, 19 March 2007 (UTC)
- Nobody reverted your section additions; here's the diff. I just removed the intro, which was incorrect, and renamed a section. All the other text you put in, including the formulas, did not change. So I had basically done the merge you are suggesting above. Oleg Alexandrov (talk) 03:02, 20 March 2007 (UTC)
- Ok, I have to acknowledge that at that time I hadn't read the diff carefully; I only saw that several paragraphs added by me were removed altogether. I didn't like them, but they were needed to explain the splitting of the article. Let us state that we have both reverted each other once (March 12 and March 16).
- But this doesn't help much, since after your second edit the article became really weird (see http://en.wiki.x.io/w/index.php?title=Finite_difference&oldid=115574544). There were two definitions of the difference, two definitions of higher-order differences, two subsections concerning derivatives, and no explanation of why the article was split in such a way. And, more curiously, a finite difference is surely an operator, but a finite difference operator is another thing, not equal to a finite difference. So your new section title wasn't very good.
- I cannot say it was a merge. Mir76 08:57, 20 March 2007 (UTC)
- It was not a revert at all, and it did not have the word "revert" in the summary. Where did you get that from? I just removed the incorrect intro. Oleg Alexandrov (talk) 14:55, 20 March 2007 (UTC)
- In numerical analysis we use the operator notation to develop new finite difference formulas. A finite difference f(x+h)-f(x) is the result of applying a finite difference operator to a function. You'll find this throughout books on the num. sol. of PDEs, for example. To me, it's like distinguishing between saying that the derivative of x^2 is an operator that acts on x^2 to produce another function and saying that the derivative of x^2 is 2x, the result of having applied that operator. I use "derivative" for both of these closely related ideas. If you're trying to distinguish here between a finite difference operator and the result of that operation from the point-of-view that you could ask questions of the operator like "Is it linear and bounded, and what is its spectrum?" then it's true that those are different perspectives on it. But I don't see two corresponding entries for theoretical vs. applied derivative, integral, etc. So, I think that this isn't a distinction worth drawing here, and I definitely feel that num. analysts view it as an operator too...a very useful one! I'm inclined to revert. JJL 17:52, 19 March 2007 (UTC)
- "I'm inclined to revert" means erasing items 1-4 above? Mir76 08:57, 20 March 2007 (UTC)
- Or at least integrating them into the article. I still don't believe there's a distinction here that's important at the level of this entry. There are lots of things that can be viewed as operators. Yes, del and del(f) are not the same object, but they're the same idea and don't arise from/reside in wholly separate areas of math. I think they're two closely related aspects of one thing and not two different things. Where else is a specific operator given its own page/subpage, separate from what it does? JJL 12:41, 20 March 2007 (UTC)
- In my part of the article the operator is defined and it is clearly explained what it does: it returns another function. In the first part the difference returns a number, and that is also explicitly stated. Mir76 16:50, 21 March 2007 (UTC)
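In code the two readings sit side by side quite naturally; a minimal Python sketch of the distinction being debated (names are illustrative, not from the article):
<syntaxhighlight lang="python">
# The forward-difference *operator* maps a function to a function;
# evaluating the returned function gives the finite *difference*, a number.
def forward_difference(h):
    """Return the operator Delta_h as a function-to-function map."""
    def delta(f):
        return lambda x: f(x + h) - f(x)
    return delta

delta_h = forward_difference(0.5)       # the operator Delta_h
g = delta_h(lambda x: x * x)            # Delta_h f: again a function
print(g(1.0))                           # Delta_h f (1.0) = 1.5**2 - 1**2 = 1.25

# Higher-order differences arise by operator composition, as in the thread:
g2 = delta_h(delta_h(lambda x: x * x))  # Delta_h(Delta_h f)
print(g2(1.0))                          # 0.5, which is h**2 * f''(x) exactly here
</syntaxhighlight>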
- "I'm inclined to revert" means erasing items 1-4 above? Mir76 08:57, 20 March 2007 (UTC)
- Maybe the main reason for me to create a separate section was that I personally do not like the definition
A finite difference is a mathematical expression of the form f(x+b) − f(x+a)
- since it is only half of the truth. I do not know how to insert operators there. I cannot see the derivative or the integral being defined as an expression. Mir76 08:57, 20 March 2007 (UTC)
I give up. I cannot seamlessly integrate those items into the numerical analysis part of the article, since I am not a specialist in that field and would do more damage than good. I have already tried, without much success; maybe you can do it better. I have tried to make the introduction softer; I ask that it not be removed before a real merge is done. Mir76 16:50, 21 March 2007 (UTC)
Explain 'Generalizations' notation?
Right now I'm stealing Finite Differences for a problem, thanks to the contributors of this article. I found the article useful, helpful, understandable, and the discussion also interesting and helpful. I was especially gratified (proud) to see the quality and level of collaboration shown here in discussion. Well done. I think this collaboration illustrates that the voice and views of non-experts are an important aid to communication by experts. :)
I found a difficulty in the article, however, and hope it can be clarified. In viewing the article I found the notation in 'Generalizations' confusing where it said: Δ_h^μ[f](x) = Σ_{k=0}^{N} μ_k f(x + kh). The rest of the article refers to the superscript on Delta as the 'order', and it would therefore appear the mu on Delta should be N here. [Also, n is used throughout instead of N, so perhaps n might be regarded as more consistent?] Thank You. —Preceding unsigned comment added by 76.191.132.120 (talk) 23:04, 23 January 2008 (UTC)
- The order of a difference operator depends on the order of the polynomials it annihilates. A classical finite difference of order n kills polynomials of degree n−1. If you change the coefficients to μ_k, it may or may not kill polynomials of a particular degree; this depends upon μ. Here N is the length of the coefficient vector μ.
- So the notion of order is not applicable here, and the free superscript place was used for another (but similar in meaning) parameter. This should be incorporated into the main article sometime... Mir76 (talk) 09:55, 12 March 2008 (UTC)
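A small check of that description (a sketch; μ and N as in the comment above, everything else ours):
<syntaxhighlight lang="python">
# Which polynomials a generalized difference sum(mu_k * f(x + k*h))
# annihilates depends on mu, not merely on N = len(mu) - 1.
import sympy as sp

x, h = sp.symbols('x h')

def gen_diff(mu, f):
    return sp.expand(sum(m * f(x + k * h) for k, m in enumerate(mu)))

# Classical second difference, mu = (1, -2, 1): kills degree <= 1.
print(gen_diff([1, -2, 1], lambda y: y))       # 0
print(gen_diff([1, -2, 1], lambda y: y**2))    # 2*h**2, not 0
# Same N, different coefficients: kills constants only.
print(gen_diff([1, -1, 0], lambda y: y))       # -h
</syntaxhighlight>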
No historical perspective
A reproach that I should make to many math entries: they seem to be aimed exclusively at people who already know the subject! Apart from mathematicians who do pure math for a living, there is a vast constituency of professionals (millions) trained in other disciplines requiring a fair amount of math. Those "users" of math are often curious about math subjects beyond what they were taught. However, before they invest any time in additional math, they have to evaluate its pertinence to their core interests. How can they do that if each and every subject is not presented with enough information regarding the "why" it came into existence and the "how" it has been used so far? Of course, some very new subject may have little history and no immediate application; yet most subjects deserving a Wikipedia entry are not in this class. Indeed, they are in Wikipedia very much because they represent something of value to many practitioners.
I suggest that a brief history of the concept and a quick survey of the applications (in math and beyond) where it revealed itself as critical would do wonders for the natural readership of those articles.
- I couldn't have said it better myself. I stumbled upon this article through a tutorial on computer graphics dealing with Particle systems. The "Description" section of this article was written in mathematical jargon way beyond my comfort level; but in no way was I able to gain any insight into why central differences might be relevant to creating a Vector field. Not that I wanted someone to necessarily hold my hand, but the "definition" cited here is purely abstract and does not present any practical applications that I might be able to relate to my particular field of study.
Symbol definitions
What does the symbol "h" refer to in the formulae in this article? I can't find it defined in the article itself. It would be nice if it were clarified. Showeropera (talk) 18:05, 1 May 2020 (UTC)
A second (recurrent!) sore point is the matter of the local jargon. Mathematicians seem to be the only species of scientific workers that assume their jargon (i.e. their notation) does not need to be properly introduced. Sorry, if you want to be understood, you need to be precise about the conventions you are following. It happens that millions of practitioners in related fields use your beloved symbols in entirely different contexts, sometimes with the same meaning and sometimes with a different one. Without an easily accessed link to the proper glossary and grammar, your brilliant demonstrations look like so many tantalizing charades.
I suggest such math entries should always include, at the top, a link to another wiki page that would contain all the symbols used in the article and their meaning in that type of context. This glossary page could, of course, be used by many articles on related subjects, provided they do not depart from the same convention. By the way, this is what many other professions do with success (programmers among others).
I second this suggestion! Necmon (talk) 10:33, 28 October 2008 (UTC)
central difference
I am citing this from the article:
"The main problem with the central difference method, however, is that oscillating functions can yield zero derivative. If h(nh)=1 for n uneven, and h(nh)=2 for n even, then f'(nh)=0 if it is calculated with the central difference scheme. This is particularly troublesome if the domain of f is discrete."
Can someone clarify this (messy) example? The bad expression prevents me from checking if the statement is true. Also, the statement suggests that there are no such problems when using the one-sided difference. I'd like to see an example that's problematic with one but not the other; also, I'm interested to know how a discrete domain of a (not-yet-discretized) function is reasonable in a case where one wants to approximate the derivative. 91.15.153.48 (talk) 13:45, 16 May 2010 (UTC)
I'm pretty sure that that example is just wrong. It seems perfectly reasonable to say that the derivative is 0 everywhere and that there is only a second derivative at grid points. — Preceding unsigned comment added by 137.22.170.42 (talk) 17:27, 4 June 2018 (UTC)
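For what it's worth, the quoted example can be checked directly (reading the quote's ambiguous "h(nh)" as a grid function f; the script is ours):
<syntaxhighlight lang="python">
# On an alternating grid function the central difference vanishes
# everywhere, while the one-sided difference does not.
def f(n):
    return 1 if n % 2 else 2           # 1 at odd grid points, 2 at even

h = 1                                  # grid spacing
for n in range(1, 5):
    central = (f(n + 1) - f(n - 1)) / (2 * h)
    forward = (f(n + 1) - f(n)) / h
    print(n, central, forward)         # central: 0.0; forward: +1 or -1
</syntaxhighlight>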
Many (e.g. numerical analysts) consider the definition of the central difference operator to be (f(x+h) - f(x-h))/2h and not (f(x+h/2) - f(x-h/2))/h. CSDarrow (talk) 16:22, 30 March 2012 (UTC)
Multivariate example
When it is first presented, the derivative of the univariate "f" is described as "f' = (f(x+h/2) - f(x-h/2))/h". In the multivariate example, it appears as "f_{x} = (f(x+h,y) - f(x-h,y))/2h" (someone compensates the "h" by dividing by "2h", seems ok...). However, the formula for "f_{xx}" is divided by "h^2". It seems to me that the "h" used for "f_{x}" is not the same as for "f_{xx}". The denominator for "f_{xx}" should be "4h^2" if "f_{xx}" was derived from "f_{x}" (as it seems from the context...). I am not a math person... and I may even be wrong... but I got confused by that... Capagot (talk) 19:15, 17 September 2010 (UTC)
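For reference, the scaling of the three-point stencil can be checked symbolically; a sketch with SymPy (the test function is ours). Note that composing the wide central difference f_x twice would indeed give a 4h^2 denominator, but that is the same stencil at spacing 2h; the article's f_{xx} uses neighbours one step away, for which h^2 is correct:
<syntaxhighlight lang="python">
# Checking the denominator of the second-difference stencil on a
# concrete smooth function: h**2 is the right scaling.
import sympy as sp

x, h = sp.symbols('x h')
f = sp.sin(x)                                   # any smooth test function

stencil = f.subs(x, x + h) - 2 * f + f.subs(x, x - h)
print(sp.limit(stencil / h**2, h, 0))           # -sin(x) = f''(x): correct
print(sp.limit(stencil / (4 * h**2), h, 0))     # -sin(x)/4: wrong by 4
</syntaxhighlight>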
Accuracy
I think it would be welcome to add some info on how these formulas are affected by floating-point arithmetic, e.g. the effects of cancellation and the tradeoff between truncation error and rounding error for different step sizes. — Preceding unsigned comment added by 130.237.43.79 (talk) 14:04, 11 July 2011 (UTC)
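Agreed that this would be useful; for concreteness, a minimal sketch of the tradeoff (our own example, forward difference of exp at x = 1): the truncation error shrinks like h while the rounding error grows like eps/h, so in double precision the total error bottoms out near h ≈ 1e-8.
<syntaxhighlight lang="python">
# Total error of a forward difference vs. step size: truncation error
# ~ h/2*|f''(x)| plus rounding error ~ eps/h, minimized near sqrt(eps).
import math

x = 1.0
exact = math.exp(x)
for e in range(2, 15, 2):
    h = 10.0 ** (-e)
    approx = (math.exp(x + h) - math.exp(x)) / h
    print(f"h = 1e-{e:02d}   error = {abs(approx - exact):.2e}")
</syntaxhighlight>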
Example using the Fibonacci Sequence
Shouldn't the canonical first few numbers (0, 1, 1, 2...) be used in the Fibonacci example, and not (2, 2, 4) as currently done? Syd Barrett (talk) 21:49, 12 May 2013 (UTC)
Actually this is not the Fibonacci sequence. This is a mistake. — Preceding unsigned comment added by 24.34.193.55 (talk) 18:04, 17 June 2014 (UTC)
Starting with the definition, which appears hard to source as-is. The topic that can actually be found in numerous books is finite difference approximation (of derivatives). The word "approximation" is not gratuitous, because a (first-order) finite difference approximation of the derivative is indeed the difference quotient, and the symmetric difference quotient is the 2nd-order approximation of the (first) derivative etc. See [1] [2] [3] for instance. ¶ Going to the 1st section of this page, perhaps there are some sources [though none are cited inline on the page] which define the forward/backward/central difference without division by h (or 2h), but those I've looked at [see links in previous sentence] don't do that. Also, this article is inconsistent with itself when it moves on to "2nd order central" [section] and so forth, where it defines them as including division by h^2 [in the case of the "2nd order central"] etc. Speaking of which, the "2nd order central" is actually the difference quotient of the second symmetric derivative etc. ¶ So basically Wikipedia has two rather bad quality pages (this and difference quotient) on (more or less) the same topic [framed poorly] and no page on the real topic... There's a bit better material at numerical differentiation, on the basics at least, but it's not broad enough. Some1Redirects4You (talk) 12:47, 27 April 2015 (UTC)
- I am not convinced that you have understood WP editorship. This is not a forum for critical discussion, but a hands-on venue for proposed improvements. What, exactly, are you proposing to do to improve the article? "Unfunded mandates", asking someone else to fix this to your satisfaction, are hardly constructive. I would disagree with you that this is an "off subject" article. It is distinctly not about finite differences as an approximation tool, and is more pitched to umbral concepts and techniques. The introduction you are fussing about appears like a bag of definitions for the reader unfamiliar with the language. Why don't you try to rewrite difference quotient to the satisfaction of its editors, instead? Cuzkatzimhut (talk) 13:28, 27 April 2015 (UTC)
- I am talking about this article in the paragraph above. If you reread what I wrote carefully, you might see that. Some1Redirects4You (talk) 13:31, 27 April 2015 (UTC)
- (edit conflict) After looking a bit at the various references, it seems the presentation here bears some resemblance to the 1939 book of hu:Jordán Károly. Alas, that's a rather obscure book nowadays, which doesn't seem to have inspired treatment along that line in [m]any subsequent sources. Don't even get me started on the 1870 book of Boole being used as a source here. Boole didn't even understand well enough the algebra that bears his name for it to be used as a source for contemporary presentations, so I doubt this other book of his is much better. The calculus of finite differences (as conceived/presented in the works of Boole and Károly) doesn't seem to have much relevance today. So it should not influence the presentation outside of that section, which might even be better to spin off to its own article owing to too numerous idiosyncrasies of little relevance except to math historians, perhaps. Some1Redirects4You (talk) 13:29, 27 April 2015 (UTC)
To summarize what I wrote above: to most [if not all] present-day sources, finite difference[s] refers to finite difference approximations and not to the finite difference calculus of Boole & Károly. Some1Redirects4You (talk) 13:37, 27 April 2015 (UTC)
Gratuitous distracting Isaac Newton template
User:TonyTheTiger reinserted a gratuitous, indeed highly inappropriate, template on Isaac Newton at the bottom of the article, which I had removed, purporting not to be convinced by my explanation that the article is not a shrine to Newton. Well, it isn't! Newton's interpolation formula is properly showcased in its own section, but this technical article is not about Newton. George Boole has much more to do with it, or the umbral mathematicians mentioned---and besides, this is not a history-heavy venue.
Sticking in handles of irrelevant hero-worship information degrades the quality of the article. The same knee-jerk abuse has been inflicted on Table of Newtonian series, Newton polynomial, etc... It looks like anything with Newton's name in it has been contaminated this way. I fear the inserter may well not care to read and understand the article in the first place. I leave it to somebody else to remove it this time, but ... Behold! Cuzkatzimhut (talk) 10:35, 15 October 2015 (UTC)
- Cuzkatzimhut, what kind of hero worship do you suggest? I barely know Newton. I only have a masters in stats; I have no higher hard-core degrees and have not studied physics since high school (over 30 years ago). I am just doing the template because Newton is a WP:VA subject without a template. I asked for help creating the template on a half dozen project talk pages. Since no one would step up, I did the template myself. This is a basic navbox, and unless this article is misplaced on the navbox, it seems that this page is an appropriate location for the template.--TonyTheTiger (T / C / WP:FOUR / WP:CHICAGO / WP:WAWARD) 19:46, 16 October 2015 (UTC)
As I indicated, this is not an article about Newton's work, which is, of course, represented in a section of it. In that sense, it may well be placed in the navbox.
However, I don't believe this is an excuse to throw sundry unrelated Newtoniana at the hapless reader who comes to this article to learn about the calculus of finite differences and possibly the umbral marvels underlying them. As I indicated, seeing the Newton template here one might get the unsound impression that, somehow, this is about Newton's life work and not about math, so, yet another exhibit in Newton's shrine. The template is fine in Newton's wiki, or other ones, but wildly distracting here. I appeal to the common sense of the other page watchers to remove it. If you really wished to be constructive, you might initiate merges of the Table of Newtonian series etc... into this article. Cuzkatzimhut (talk) 20:11, 16 October 2015 (UTC)
Possible Mistake
Here is a statement from the article: "The relationship of these higher-order differences with the respective derivatives is straightforward," followed by the formula f^{(n)}(x) = Δ_h^n[f](x)/h^n + O(h).
Since the derivative at a point is always smaller than the difference (as shown before in the article), the plus symbol in the expression should be replaced by a minus, or the derivative and difference operators should change places. (Just my opinion, could be wrong) --KolosovP (talk) 18:23, 4 August 2016 (UTC)
- You appear confused about the meaning of the Big O notation. This item should have been placed at the bottom of the page as per talk page instructions.Cuzkatzimhut (talk) 16:11, 16 August 2016 (UTC)
The big O notation means the "greater" function bounds the "smaller" one, so g(x) < k(x) means k(x) = g(x) + O(x), right? The derivative at a particular point is always smaller than the difference, as shown before in the article, so the difference does not have its limiting behavior given by the derivative, or am I wrong? --KolosovP (talk) 05:24, 17 August 2016 (UTC)
- Wrong. The definition explicitly emphasizes absolute values, not signs. Please read the article and its illustrations. Cuzkatzimhut (talk) 11:26, 17 August 2016 (UTC)
- Even so, the absolute value of the derivative taken over some set is less than the absolute value of the difference, so could the big O be a negative value? --KolosovP (talk) 14:26, 17 August 2016 (UTC)
I am not sure you appreciate the asymptotic notation: do you agree that f(x)=O(g(x)) and −f(x)=O(g(x)) mean exactly the same thing? Cuzkatzimhut (talk) 16:00, 17 August 2016 (UTC)
- Moved to bottom of the Talk page as per WP rules
What is being bounded is differences such as Δ_h^n[f](x)/h^n − f^{(n)}(x), etc., so no signage is meaningful. Cuzkatzimhut (talk) 16:06, 17 August 2016 (UTC)
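For the record, the definition being invoked here (with the absolute values) reads, for h → 0:
<math display="block">f(h) = O\big(g(h)\big) \iff \text{there exist } C > 0,\ h_0 > 0 \text{ such that } |f(h)| \le C\,|g(h)| \text{ for all } 0 < h < h_0,</math>
so f = O(g) and −f = O(g) are the same statement; the O(h) in the article bounds the size of the discrepancy, not its sign.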
External links modified
editHello fellow Wikipedians,
I have just modified one external link on Finite difference. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20090419132601/http://www.stanford.edu:80/~dgleich/publications/finite-calculus.pdf to http://www.stanford.edu/~dgleich/publications/finite-calculus.pdf
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 19:56, 31 December 2016 (UTC)
Upper summation limit missing?
In the section "Newton's series", in the last formula, at the leftmost summation sign, should not an upper summation bound be mentioned there? I suspect an infinity sign is missing. Is that correct? Redav (talk) 08:40, 17 May 2020 (UTC)
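If the formula in question is the Newton forward difference series, then yes: its usual statement carries an infinite upper limit (the sum terminating automatically when f is a polynomial),
<math display="block">f(x) = \sum_{k=0}^{\infty} \binom{x-a}{k}\, \Delta^k [f](a).</math>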
More accurate? Or rather of higher order in h?
editIn the article I read: "However, the central (also called centered) difference yields a more accurate approximation."
In my view, and as explained after the cited text, given a certain mesh size h, a central difference is not necessarily more accurate than a forward or backward difference.
What seems true is that the rate of convergence to the actual derivative in the limit for small h is faster.
The confusion seems to arise from treating a higher convergence rate as if it were synonymous with higher accuracy for a fixed value of h. Does anyone disagree? Redav (talk) 10:32, 24 May 2020 (UTC)
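The two notions can be seen apart numerically; a small sketch (ours, with f = exp at x = 1) in which the central error shrinks about 100x per decade of h (rate O(h^2)) versus about 10x for the forward difference (O(h)), a statement about rates rather than about any one fixed h:
<syntaxhighlight lang="python">
# Error of forward vs. central differences as h shrinks: the forward
# error drops ~10x per decade (O(h)), the central ~100x (O(h**2)).
import math

def fwd(f, x, h):
    return (f(x + h) - f(x)) / h

def ctr(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
for h in (0.1, 0.01, 0.001):
    e_f = abs(fwd(math.exp, x, h) - math.exp(x))
    e_c = abs(ctr(math.exp, x, h) - math.exp(x))
    print(f"h = {h}:  forward {e_f:.2e}   central {e_c:.2e}")
</syntaxhighlight>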
India Education Program course assignment
This article was the subject of an educational assignment at College of Engineering, Pune supported by Wikipedia Ambassadors through the India Education Program. Further details are available on the course page.
The above message was substituted from {{IEP assignment}}
by PrimeBOT (talk) on 19:53, 1 February 2023 (UTC)
inconsistency in higher order section
There seem to be missing denominator h terms in the higher-order derivative section. It would be nice to have a citation here to look it up, but I'd guess it is h^n. 018 (talk) 23:44, 12 November 2023 (UTC)
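For reference, the relation presumably intended in that section (stated here in the usual way, with the h^n denominator guessed above) is
<math display="block">\frac{\Delta_h^n[f](x)}{h^n} = f^{(n)}(x) + O(h),</math>
so the n-th difference does need an h^n denominator to approximate the n-th derivative.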
Formula in Multivariate Finite Difference
editWhile I have not attempted to prove this through Taylor expansions, the formula below appears to be incorrect. I say this because I have implemented both this and the other partial derivative formula in a code I've written. Since this article contains two formula for the same second-order partial derivative, it is easy to verify that the two formula give very different results. Moreover, I can reconstruct an approximation of the initial function through a two-dimensional Taylor expansion. However, when I do this using the formula below, the Taylor expansion clearly deviates from the initial function. The other formula in the article appears to work without issue. This comparison was made using the same f(), h, and k.