Wikipedia:Reference desk/Archives/Mathematics/2009 March 20

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 20


First and Second derivative


I need help finding the first and second derivatives. I'm confused by the product and chain rule. —Preceding unsigned comment added by Dew555 (talkcontribs) 02:42, 20 March 2009 (UTC)[reply]

If you show what you've done so far, we can see where you may be going wrong. It's policy not to just give answers. However, you could start with the simpler case of a product of two powers of x, both by the product rule and, to check, by simplifying first to get the overall power of x. If that's OK, what's the derivative of the chain-rule part of your function? Then put the parts together. →86.132.238.145 (talk) 10:12, 20 March 2009 (UTC)[reply]
(Explicit answer by User:Superwj5 removed --131.114.72.215 (talk) 10:18, 20 March 2009 (UTC))[reply]
These people will help you if you show you have made some effort to solve your problem. That way they can better understand where you ran into difficulties. So please try answering the following:
1. Are you able to apply the product (Leibniz) formula for the first and second derivatives of a product f(x)g(x) in terms of f, g and their derivatives? What do you get?
Now take f and g to be the two factors of your function.
2. Are you able to write the derivatives f′, f″ and g′, g″ for this function? What do you get?
3. Are you able to substitute the result of point 2 into the formula of point 1? What do you get?
--131.114.72.215 (talk) 10:14, 20 March 2009 (UTC)[reply]
(edit conflict ×3) If you are having difficulty with this problem, then try starting with some easier problems. For example, try differentiating the following functions. If you get stuck, let us know where you get stuck:
  1.   (Requires power rule.)
  2.   (Requires either product rule or power rule.)
  3.   (Requires power rule and derivatives of sums.)
  4.   (Requires product rule.)
  5.   (Requires product rule twice.)
  6.   (Requires chain rule.)
  7.   (Requires using the chain rule twice.)
  8.   (...and three times.)
After you understand how to apply the product rule and chain rule separately, then try your original problem.
Maybe some other ref-desk-ers can offer help explaining...? Eric. 131.215.158.184 (talk) 10:17, 20 March 2009 (UTC)[reply]
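A note for anyone working through this archive later: a computer algebra system is handy for checking exercises like these. Here is a minimal sympy sketch; the function x**2*sin(3*x) is just a stand-in example that needs both rules, not the original poster's function (which did not survive in this archive):

from sympy import symbols, sin, diff

x = symbols('x')
f = x**2 * sin(3*x)    # stand-in example, NOT the function from the question

f1 = diff(f, x)        # product rule, plus chain rule for sin(3*x)
f2 = diff(f, x, 2)     # differentiate again for the second derivative

print(f1)              # equals 2*x*sin(3*x) + 3*x**2*cos(3*x)
print(f2)              # equals 2*sin(3*x) + 12*x*cos(3*x) - 9*x**2*sin(3*x)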

Polynomial Roots


Is there a formula or method to find the roots of any polynomial? If so, what is it? The Successor of Physics 09:50, 20 March 2009 (UTC)[reply]

There are formulas that give you the roots of quadratics, cubics and quartics, and I've no doubt we have articles on them. For the quintic and higher it's a theorem (see Galois theory) that there's no formula that only involves ordinary arithmetic operations and n-th roots. But you can solve the quintic with the ultraradical, and there are probably higher-degree analogues. Of course, if an approximation is good enough, you can solve any polynomial with numerical methods...129.67.186.200 (talk) 10:15, 20 March 2009 (UTC)[reply]
The articles in question are Quadratic equation#Quadratic formula, Cubic function#General formula of roots and Quartic function#The general case, along Ferrari's lines. Note that the last two of these are extremely rarely used in practice, because the roots are almost always simpler when written as "the roots of polynomial so-and-so" than as whatever comes out of these formulas. And if you need the roots for an application, you'll need to compute an approximation sooner or later anyway, and I wouldn't be surprised if it often took more computer power to calculate all these radicals to the needed precision than to just go for the roots of the polynomial directly with something like Newton's method. The search for these formulas makes an interesting piece of mathematics history, though. —JAOTC 11:31, 20 March 2009 (UTC)[reply]
Indeed. Approximating a radical is no easier than approximating a root of a general polynomial, so if you are going to calculate the root anyway, the radical expressions are useless; it is faster and more precise to approximate the root directly. — Emil J. 13:25, 20 March 2009 (UTC)[reply]
To third this opinion: yes, it takes more computation to evaluate an expression in radicals (correctly) numerically than it does to find the root. One major problem is that many of these expressions suffer from catastrophic cancellation, so that to get k bits of accuracy in the final answer, you may need nk bits of accuracy approximating the radical, where I think it can be very difficult to bound n above. In other words, even with an oracle that spat out the expression in radicals for free, it might still take much, much longer than directly applying a root-finding technique, especially if the roots are real and not clustered. This is similar to matrix inversion: even if someone offers you the inverse matrix for free, multiplying by it (correctly) might be more expensive than using a standard linear solver. JackSchmidt (talk) 19:00, 20 March 2009 (UTC)[reply]
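A minimal Python illustration of that cancellation, using the most familiar radical formula of all, the quadratic formula (the polynomial x² − 20000x + 1 is just a convenient example with one large and one tiny root):

import math

b, c = -20000.0, 1.0      # x**2 + b*x + c, roots near 2.0e4 and 5.0e-5
d = math.sqrt(b * b - 4 * c)

r1 = (-b + d) / 2         # large root: no cancellation, full accuracy
r2 = (-b - d) / 2         # small root: 20000 - 19999.999... cancels catastrophically
r2_stable = c / r1        # Vieta's relation r1*r2 = c avoids the cancellation

print(r2)                 # only about half of the double-precision digits are correct
print(r2_stable)          # accurate to full precision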
See solvable group and Abel's impossibility theorem. --PST 02:19, 21 March 2009 (UTC)[reply]
Hey, let me tell you something: I HATE APPROXIMATIONS!!!!!!!!!!!!!!!!!!!! The Successor of Physics 17:17, 21 March 2009 (UTC)[reply]
Unless the polynomial can be solved in radicals (i.e. its Galois group is solvable), approximations are pretty much all you can have (either that, or really weird notation that doesn't help you much). Even when there is a solution in radicals, it may be extremely complicated, so it doesn't help much anyway. In many situations it's best not to solve the polynomial at all and just work with it as it is; when you do need to solve it, approximations are often best. --Tango (talk) 17:44, 21 March 2009 (UTC)[reply]
And the radicals don't help you get any more than approximations if you want the answers to be expressed numerically. So you're no worse off for not being able to use radicals. Michael Hardy (talk) 19:26, 21 March 2009 (UTC)[reply]
Well, yes, "numerical" essentially implies "approximate" (unless it happens to be a rational number with a small denominator). --Tango (talk) 23:54, 21 March 2009 (UTC)[reply]

Good question! While there is not a "formula" in the usual sense, one can nevertheless operate on algebraic roots in a fairly arbitrary manner. For instance one can form polynomials in any finite number of roots of polynomials, by a result due to Leopold Kronecker. There are more modern algorithms, although I do not remember the source. Write to me if you are really interested, and I will try to dig them out. Bounding the roots is more straightforward. To the best of my knowledge, an initial bisection followed by Newton's method generally suffices to isolate the roots. 71.182.216.55 (talk) 02:51, 22 March 2009 (UTC)[reply]
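For concreteness, here is a minimal sketch of that bisection-then-Newton idea (my own toy version, not the algorithm 71.182.216.55 has in mind; x³ − 2x − 5 is just the classical test polynomial, with a real root near 2.0945514815):

def bisect_then_newton(p, dp, a, b, bisect_steps=20, newton_steps=8):
    # bisection phase: halve a sign-change bracket until Newton can take over
    assert p(a) * p(b) < 0, "need a sign change to guarantee a root in [a, b]"
    for _ in range(bisect_steps):
        m = (a + b) / 2
        if p(a) * p(m) <= 0:
            b = m
        else:
            a = m
    # Newton phase: quadratic convergence once we are close to the root
    x = (a + b) / 2
    for _ in range(newton_steps):
        x -= p(x) / dp(x)
    return x

print(bisect_then_newton(lambda t: t**3 - 2*t - 5,
                         lambda t: 3*t**2 - 2, 2.0, 3.0))   # -> 2.0945514815...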

You can also use the Sturm sequence, I think. 75.62.6.87 (talk) 10:38, 25 March 2009 (UTC)[reply]
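A small sketch of how that counts real roots, assuming sympy is available (its sturm function returns the Sturm chain p, p′, −rem(p, p′), …; by Sturm's theorem, the number of distinct real roots in (a, b] is the drop in sign changes):

from sympy import Symbol, sturm

x = Symbol('x')
chain = sturm(x**3 - 2*x - 5)   # the Sturm chain of the polynomial

def sign_changes(at):
    # count sign changes in the chain evaluated at a point, ignoring zeros
    values = [q.subs(x, at) for q in chain]
    nonzero = [v for v in values if v != 0]
    return sum(1 for u, v in zip(nonzero, nonzero[1:]) if u * v < 0)

print(sign_changes(0) - sign_changes(10))   # -> 1: one real root in (0, 10]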
Thanks! The Successor of Physics 11:30, 26 March 2009 (UTC)[reply]

Question of Cubes


I am trying to prove that 1³ ± 2³ ± ⋯ ± n³ = 0 can be achieved, for a suitable choice of signs, for all n of the form 4k−1 or 4k.

To do this, I need what I call start points for the proof. These are n = 11, 12, 15, 16, 19, 20, 23, 24. Once I have these numbers I can easily show that any 16 consecutive cubes can be made to sum to 0 by a particular arrangement of signs. However, proving the start-point cases is something that eludes me. Any suggestions as to the choice of signs in each case for the following expressions:

1³ ± 2³ ± ⋯ ± 11³

1³ ± 2³ ± ⋯ ± 12³

1³ ± 2³ ± ⋯ ± 15³

1³ ± 2³ ± ⋯ ± 16³

1³ ± 2³ ± ⋯ ± 19³

1³ ± 2³ ± ⋯ ± 20³

1³ ± 2³ ± ⋯ ± 23³

1³ ± 2³ ± ⋯ ± 24³,

in order to make each of them 0? —Preceding unsigned comment added by 143.117.143.13 (talk) 12:34, 20 March 2009 (UTC)[reply]

This looks like a perfect task for a computer: finite, numerical, and boring. Algebraist 13:27, 20 March 2009 (UTC)[reply]
Yes, just looking at the last case, there are 23 signs which can each be + or −. That's 2²³ possibilities, or some 8.4 million. That's well beyond what a person can do, but easy for a computer to handle. StuRat (talk) 14:56, 20 March 2009 (UTC)[reply]
Come on, people. How did mathematicians ever get anything done before there were computers? I found one solution for n=12 and think there are no solutions for n=11.
1³ + 2³ − 3³ + 4³ − 5³ − 6³ − 7³ + 8³ + 9³ − 10³ − 11³ + 12³ = 0.
I promise to find another one if the original poster explains how his general solution works. Dauto (talk) 18:24, 20 March 2009 (UTC)[reply]
Before computers mathematicians had to do a lot more tedious work. I for one am not masochistic enough to go back to those days without good reason. Algebraist 18:33, 20 March 2009 (UTC)[reply]
And those cases for n=11 and n=12 are far simpler. The n=11 case only has 1024 possibilities to try, which, while tedious, is at least possible to do by hand. StuRat (talk) 18:40, 20 March 2009 (UTC)[reply]
As for the general solution, it works similarly to the case 1² ± 2² ± ⋯ ± n² = 0 which was answered on this desk recently (but I didn't succeed in locating it in the archives): for every n, the sequence of cubes n³, …, (n + 15)³ with signs +−−+−++−−++−+−−+ sums to zero. — Emil J. 18:37, 20 March 2009 (UTC)[reply]
For those of you saying it's impossible to do it by hand: I got a solution for n=24, and I used only a hand-held calculator
 
Dauto (talk) 20:31, 20 March 2009 (UTC)[reply]
So how did you do it ? Trying every combo obviously isn't the answer. StuRat (talk) 21:48, 20 March 2009 (UTC)[reply]
No, trying every combination would take too long. First, the number of solutions probably grows quite fast as well, and all we need is one solution for each value of n. Second (and more important), there are some heuristics that can speed things up considerably. During the search, any time a partial sum (or difference) exceeds the sum of the remaining terms (the ones not yet included in the partial sum), we simply skip that branch of the search, since no combination of signs of the remaining terms will be able to cancel the partial sum. That saves an incredible amount of time, since the distribution of the terms is quite skewed, as long as we start the search by fixing the signs of the larger terms first and work our way down to the smaller terms. That alone allows the solution for n=12 to be found in just a few minutes. Dauto (talk) 04:49, 21 March 2009 (UTC)[reply]
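Here is a rough Python reconstruction of that pruned search (my own sketch, not Dauto's actual procedure): signs are fixed for the largest cubes first, and a branch is abandoned as soon as the partial sum exceeds the total of the terms still unassigned:

def find_signs(n):
    # search for signs making 1^3 +/- 2^3 +/- ... +/- n^3 equal to 0
    cubes = [k ** 3 for k in range(1, n + 1)]
    head = [0] * (n + 1)                 # head[k] = 1^3 + 2^3 + ... + k^3
    for k in range(1, n + 1):
        head[k] = head[k - 1] + cubes[k - 1]

    def search(k, partial, signs):
        # cubes[k:] already signed; prune if the rest cannot cancel the sum
        if abs(partial) > head[k]:
            return None
        if k == 0:
            return signs                 # partial must be 0 here, by the test above
        for sign in (1, -1):
            found = search(k - 1, partial + sign * cubes[k - 1], [sign] + signs)
            if found:
                return found
        return None

    return search(n, 0, [])

solution = find_signs(12)
print(solution)                                                  # signs for 1^3 ... 12^3
print(sum(s * k ** 3 for k, s in zip(range(1, 13), solution)))   # -> 0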


Yes, how did you do it? Well, some nights ago I was sleepless and ran some of these computations on my computer, just out of curiosity... so: no solutions for n<12, and only two for n=12: one is Dauto's, the other is the same with all signs inverted. For the cases n=15 and n=16, we have solutions for free with the 16 signs written by EmilJ, because we can start the sequence either with 0 or with 1. To make it short, it came out that there are solutions for every n ≡ 0 or 3 (mod 4) with n ≥ 12. I did not make the program write them out, though, just the number of solutions. The day after, I remembered Sloane's OEIS and searched for the sequences counting the solutions for sums of squares and of cubes: neither was there, so I submitted them (after all, it contains sequences like a(n) = 100n + 1, so it may well have these too). However, the number of solutions to ±1 ± 2 ± ⋯ ± n = 0 was there: [1], with some interesting links. One is to S. R. Finch's page, where he finds the asymptotics for the number of solutions in a quick and nice way, by means of the central limit theorem; the computation seems to extend to the case of sums of r-th powers, luckily giving the same result one gets by expanding the integrals.
By the way, note that the 16 signs in the sequence above are those of the expansion of the polynomial (1 − x)(1 − x²)(1 − x⁴)(1 − x⁸). Since (1 − x)⁴ divides this product, if S is the shift operator, acting on sequences by (Sa)ₙ = aₙ₊₁, then the linear operator (1 − S)(1 − S²)(1 − S⁴)(1 − S⁸) contains the fourth discrete difference operator (1 − S)⁴ as a factor: and this is the reason why it kills the sequence of cubes, and in fact all polynomial sequences of degree < 4. Of course, this argument generalizes to any degree r, so we have solutions to 1^r ± 2^r ± ⋯ ± n^r = 0 for all r and all n that are multiples of 2^(r+1). It seems difficult to say what is the least n = n(r) for which there is a solution; it could be much less than the bound 2^(r+1).... --pma (talk) 21:54, 20 March 2009 (UTC)[reply]
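That annihilation is easy to check numerically; a short Python verification of EmilJ's pattern on a few blocks of 16 consecutive cubes:

# the 16 signs + - - + - + + - - + + - + - - + from EmilJ's reply
signs = [1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1]

for start in (1, 17, 1000):
    print(sum(s * (start + k) ** 3 for k, s in enumerate(signs)))   # -> 0 each time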

Here's some Python code to solve this using dynamic programming:

numzerosumcubes = {0: 1}  # achievable signed sum -> number of sign choices reaching it
for i in range(1, 25):
    replacement = {}
    for s, frequency in numzerosumcubes.items():
        # extend each partial sum with -i**3 and with +i**3
        replacement[s - i**3] = replacement.get(s - i**3, 0) + frequency
        replacement[s + i**3] = replacement.get(s + i**3, 0) + frequency
    numzerosumcubes = replacement
    print("Number of solutions for n =", i, ":", numzerosumcubes.get(0, 0))

It finds solutions for all n congruent to 0 or -1 mod 4, starting at 15, and outputs the number of solutions for each n. —David Eppstein (talk) 22:11, 20 March 2009 (UTC)[reply]

Intrinsic Equations


Given the intrinsic equation of a curve s = f(ψ), how do you find the Cartesian form of the equation? 92.3.124.220 (talk) 22:22, 20 March 2009 (UTC)[reply]

Here ψ is the angle between the tangent to the (planar) curve and the x-axis, I suppose. Write it as ψ = g(s). Then you should find a Cartesian parametrization by arc length (x(s), y(s)), where x(s) and y(s) come from the two differential equations x′(s) = cos g(s) and y′(s) = sin g(s). Check here also. --84.221.81.199 (talk) 09:58, 21 March 2009 (UTC)[reply]
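To make that recipe concrete, here is a minimal numerical sketch with numpy. It takes the circle's Whewell equation s = aψ as a known test case (so ψ(s) = s/a) and integrates the two ODEs with the trapezoidal rule; the exact circle x = a·sin(s/a), y = a·(1 − cos(s/a)) is recovered up to discretization error:

import numpy as np

a = 2.0                                      # test case: s = a*psi, a circle of radius a
s = np.linspace(0.0, 2 * np.pi * a, 2001)    # arc-length samples over one full turn
psi = s / a                                  # the intrinsic equation, solved for psi(s)

# integrate x'(s) = cos(psi(s)) and y'(s) = sin(psi(s)) by the trapezoidal rule
ds = s[1] - s[0]
x = np.concatenate(([0.0], np.cumsum(ds * (np.cos(psi[1:]) + np.cos(psi[:-1])) / 2)))
y = np.concatenate(([0.0], np.cumsum(ds * (np.sin(psi[1:]) + np.sin(psi[:-1])) / 2)))

print(np.max(np.abs(x - a * np.sin(s / a))))          # small: discretization error only
print(np.max(np.abs(y - a * (1 - np.cos(s / a)))))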