Wikipedia:Reference desk/Archives/Mathematics/2009 November 29

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 29


Hi all,

I was wondering whether anyone could clarify, in the Euler-Lagrange Equation article, under the section on the generalisation to a function of several variables:

for the multivariable E-L equation given (specifically I'm looking at the sum for n=2, i.e. functions of the 2 variables x and t), exactly what is being kept constant in each partial derivative? For example (and particularly important), in the leftmost partial derivative of the second term (if that makes sense) - i.e. the ∂/∂x_i before the ∂L/∂f_{x_i}: is that partial derivative taken with respect to (I'll use x_1=x, x_2=t from now on), say, x keeping just t fixed, or x keeping t, f, f_t and f_x fixed? (And vice versa for t keeping x fixed, or t keeping "..." fixed, etc.) I'd look for clarification elsewhere but I can't seem to find the formula on any other sites, and without a derivation it isn't particularly clear to me which derivatives should be fixing which variables - thanks! 131.111.8.96 (talk) 16:42, 29 November 2009 (UTC)[reply]

In fact the notation is a bit ambiguous. Say your Lagrangian L is a function of 5 variables, L(t, x, u, p, q). Your functional is S[f] = ∫∫ L(t, x, f(t,x), f_t(t,x), f_x(t,x)) dt dx, defined for certain functions f(t,x). In the EL equations, the terms ∂L/∂f, ∂L/∂f_t and ∂L/∂f_x denote respectively just the partial derivatives of L with respect to the third, fourth and fifth variables: that is, the variables that are occupied respectively by f(t,x), f_t(t,x) and f_x(t,x) in the expression of the action functional. With a simpler and clearer notation one would denote them just ∂_3L, ∂_4L and ∂_5L; these partial derivatives only depend on L, not on the function f. Then, you evaluate these three functions at the 5-tuple (t, x, f(t,x), f_t(t,x), f_x(t,x)) (and these of course depend on t, x, and on a function f of (t,x)). You further compute the derivative with respect to the variable t of the composition ∂_4L(t, x, f(t,x), f_t(t,x), f_x(t,x)), and the derivative with respect to the variable x of the composition ∂_5L(t, x, f(t,x), f_t(t,x), f_x(t,x)), and subtract both from ∂_3L(t, x, f(t,x), f_t(t,x), f_x(t,x)). The result is a function of t and x which is identically 0 iff f is a solution of the EL equations. Note that the two total derivatives would produce 4 + 4 terms if you expand them by the chain rule. Everything should be clear if you look at the proof of the computation; in particular note that the derivatives ∂L/∂f, ∂L/∂f_t and ∂L/∂f_x come from a differentiation under the integral sign, and are just the partial derivatives of L; the outer derivatives d/dt and d/dx come from integrating by parts, so they are really derivatives of the compositions with f and its partial derivatives. --pma (talk) 00:28, 30 November 2009 (UTC)[reply]
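
For a concrete check of that recipe, here is a minimal sympy sketch (the wave-equation Lagrangian L(t,x,u,p,q) = p²/2 − q²/2 is just an example, and the slot names u, p, q are arbitrary): it differentiates L with respect to its slots first, composes with (t, x, f, f_t, f_x), and only then takes total derivatives.

 import sympy as sp
 
 t, x, u, p, q = sp.symbols('t x u p q')
 f = sp.Function('f')(t, x)
 
 # Example Lagrangian in the five "slots" (t, x, u, p, q); here u stands for f,
 # p for f_t and q for f_x.  L = p**2/2 - q**2/2 is the wave-equation Lagrangian.
 L = p**2/2 - q**2/2
 
 # Partial derivatives of L with respect to its 3rd, 4th and 5th slots;
 # these depend only on L, not on any particular f.
 dL_du, dL_dp, dL_dq = sp.diff(L, u), sp.diff(L, p), sp.diff(L, q)
 
 # Compose with the 5-tuple (t, x, f, f_t, f_x) ...
 comp = {u: f, p: sp.diff(f, t), q: sp.diff(f, x)}
 # ... and only now take *total* derivatives with respect to t and x.
 EL = dL_du.subs(comp) - sp.diff(dL_dp.subs(comp), t) - sp.diff(dL_dq.subs(comp), x)
 
 print(sp.simplify(EL))  # f_xx - f_tt, i.e. the wave equation f_tt = f_xx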

Differential equations


Suppose we are given the differential equation f(x)dx + g(y)dy = 0. We can now take the antiderivative to conclude that F(x) + G(y) = c where F' = f and G' = g. My question is what validates our taking the antiderivative in this fashion? The way I validate taking antiderivatives is that I consider the derivative as a 1-1 function from the set of differentiable functions (each function, I understand, represents a class of functions which differ by constants) to another set of functions. Then I see the antiderivative as the inverse function. Thus taking antiderivatives wrt x should be applied throughout the equation and not merely to the f(x) term. Hence this should yield F + xg = c. How can we just choose to take the antiderivative w.r.t. x for f and the antiderivative w.r.t. y for g? Thanks-Shahab (talk) 17:02, 29 November 2009 (UTC)[reply]

I am not sure I understand your question precisely, but the article separation of variables formally explains a similar situation. Hope that helps. Pallida  Mors 22:29, 29 November 2009 (UTC)[reply]
Details: you should first clarify what the meaning of the equation is, that is, what the unknown is and what you require of it. Say f and g are continuous. I assume you are looking for a function y: I → R, and that the equation means that you want f(x) + g(y(x)) y'(x)=0 for all x in I. With your notation, this can also be written F'(x)+G'(y(x)) y'(x)=0, or [F(x)+G(y(x))]'=0, which is equivalent to saying that F(x)+G(y(x)) is a constant. If g>0 (or <0) then G is invertible and for any c you have a solution y(x)=G⁻¹(c−F(x)), defined on some domain I. --pma (talk) 23:04, 29 November 2009 (UTC)[reply]
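
As a concrete sanity check of the above (the choice f(x) = 2x, g(y) = 3y² is just an example), a short sympy sketch:

 import sympy as sp
 
 x = sp.symbols('x')
 y = sp.Function('y')
 
 # The separable equation f(x) + g(y) y' = 0 with f(x) = 2*x and g(y) = 3*y**2.
 ode = sp.Eq(2*x + 3*y(x)**2 * y(x).diff(x), 0)
 
 # F(x) + G(y(x)) with F' = f and G' = g; here F = x**2 and G = y**3.
 conserved = x**2 + y(x)**3
 
 # Its x-derivative is literally the left-hand side of the ODE (chain rule),
 # so it vanishes along every solution, i.e. x**2 + y(x)**3 = c.
 print(sp.simplify(sp.diff(conserved, x) - ode.lhs))   # -> 0
 
 # dsolve reaches the same answer (branches of y = (C1 - x**2)**(1/3)).
 print(sp.dsolve(ode, y(x)))
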
I understood your post pma but unfortunately my doubt remains. Consider a differential equation of this type: dx + 2y dy - 2dz = 0. I have seen a book which just states that, integrating, we have x + y² - 2z = c. My question is: if the antiderivative is being taken, shouldn't it be taken wrt x for all the terms, or wrt y for all the terms, or wrt z for all the terms? How can we just choose to take the antiderivative wrt x for the first term, wrt y for the second term and wrt z for the third? If the antiderivative wrt x is an operator shouldn't it be applied simultaneously over all terms? Secondly, typically, what is the unknown here? Thanks-Shahab (talk) 04:16, 30 November 2009 (UTC)[reply]
Note that you are NOT taking an anti-derivative w.r.t. x for the first term, y for the second term etc. To understand what is happening you need to think of your equation like so: rewrite it as dx/dt + 2y dy/dt - 2 dz/dt = 0; now when you integrate you are integrating w.r.t. t in all cases. You see what is happening now? Those loose-hanging dx's, dy's and dz's are actually meaningless quantities, but we treat ("abuse") them as differentials. I'm not a full-on mathematician but I think your answer may lie somewhere at implicit function or differential (mathematics). Also remember that because of the fundamental theorem of calculus we can "get away with" a certain amount of hand-waving, second-guessing, intuitive leaps and "incorrect" manipulation and treatment of differentials when it comes to solving differential equations, as long as we can show that the final solution satisfies the original problem statement. Zunaidfor your great great grand-daughter 09:13, 30 November 2009 (UTC)[reply]


(ec) Well, your doubt is reasonable, because "dx + 2ydy - 2dz = 0", in the absence of a convention to interpret it, should sound like a question without a question mark (like the ones that anonymous questioners usually leave here, much to our satisfaction). In their customary notation, dx, dy, and dz denote the elements of the standard basis of the dual space of R³, that is (R³)*. So, dx is the first coordinate form, assigning to any vector (x,y,z) the number x, and dy and dz have analogous meanings. An expression like ω(x,y,z) = a(x,y,z)dx + b(x,y,z)dy + c(x,y,z)dz, where, say, a, b, c are three R-valued functions defined on some open set Ω of R³, represents a differential 1-form, that is, a map ω: Ω→(R³)*. Thus the equation a(x,y,z)dx + b(x,y,z)dy + c(x,y,z)dz = 0 primarily represents a distribution of planes on Ω, meaning that for all (x,y,z) in Ω you consider the kernel of the linear form ω(x,y,z), which is a certain 2-plane V(x,y,z), unless ω vanishes at the point (x,y,z): let us explicitly assume that this never happens here. The natural question that a distribution poses is: is it integrable, that is, is there (in the present case) a family of surfaces filling Ω, such that the surface passing through (x,y,z) (the "leaf") has V(x,y,z) as tangent plane? As you see, this generalizes the ODE problem of integrating a distribution of lines (in that case you'd look for a family of curves whose tangents are the lines of the distribution). Here, if you choose to represent the surfaces as level sets of a function f(x,y,z), which turns out to be possible at least locally, the problem translates into a system of linear first order PDEs: ∂f/∂x=a, ∂f/∂y=b, ∂f/∂z=c; that is, find a function f(x,y,z) such that df=ω; in one word, find a primitive of ω. A primitive of the differential form dx + 2ydy - 2dz on R³ is the function f(x,y,z):=x + y² - 2z, and all primitives differ by a constant. Note that since, after all, Ω here is a domain in R³, you may choose to represent linear forms by the scalar product with vectors, that is, to identify (R³)* with R³. Then you would have a vector field F on Ω instead of a differential 1-form, and one would write the problem for f as grad f(x,y,z)=F(x,y,z), and call f a potential function of F instead of a primitive of ω; if such an f exists (on the domain Ω) one calls F conservative and ω exact (on Ω). Note that you need a compatibility condition for f to exist even locally.
Finally, assuming df(x0,y0,z0)≠0, there is a partial derivative of f that does not vanish at (x0,y0,z0), say ∂f/∂z(x0,y0,z0)≠0. This allows you to describe the level set at c=f(x0,y0,z0) as the graph of a function z(x,y) satisfying f(x,y,z(x,y))=c, whose existence is guaranteed by the implicit function theorem. --pma (talk) 11:01, 30 November 2009 (UTC)[reply]
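
A minimal sympy check of the example (identifying the 1-form dx + 2y dy − 2 dz with the vector field F = (1, 2y, −2), as described above):

 import sympy as sp
 
 x, y, z = sp.symbols('x y z')
 
 # Coefficients of dx + 2y dy - 2 dz, read as the vector field F = (1, 2y, -2).
 a, b, c = sp.Integer(1), 2*y, sp.Integer(-2)
 
 # Compatibility (closedness) condition: curl F = 0.
 curl = (sp.diff(c, y) - sp.diff(b, z),
         sp.diff(a, z) - sp.diff(c, x),
         sp.diff(b, x) - sp.diff(a, y))
 print(curl)                       # (0, 0, 0)
 
 # f(x, y, z) = x + y**2 - 2*z is a primitive / potential: grad f = (a, b, c).
 f = x + y**2 - 2*z
 grad_f = (sp.diff(f, x), sp.diff(f, y), sp.diff(f, z))
 print(grad_f)                     # (1, 2*y, -2)
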
Thank you pma. Your comments are of great value to me. I do not have a solid background in differential geometry and hence want to ask a few questions. Firstly, you said: So, dx is the first coordinate form, assigning to any vector (x,y,z) the number x, and dy and dz have analogous meanings. Can you explain this please? Secondly, how can we reconcile this definition of dx with the idea of it being an infinitesimal change in x? Thirdly, what is the formal definition of a primitive of a differential 1-form in the case when we are in R^n? Again, thanks-Shahab (talk) 11:00, 30 November 2009 (UTC)[reply]
As to infinitesimals and differentials, you should probably start with the definition of the Fréchet differential, say of a function f: Ω⊆R^m → R^n at a point a=(a1,..,am). As maybe you know, differentiability at the point a means the existence of a certain linear map L:R^m → R^n, denoted Df(a) or df(a), that gives us the first order expansion of f at the point a. This means that for all increments h (such that a+h is still in the domain of f), if we compute f(a+h) we get f(a+h)=f(a)+Lh+o(h). In principle, there is no infinitesimal in all that, but you may think of h as an "infinitesimal" variation of the variable a, and Lh as the corresponding infinitesimal variation of f, for the reason that the approximation f(a+h)≈f(a)+Lh gets better and better as h is taken smaller and smaller. The language of maps allows one to describe all this without introducing infinitesimal quantities (but of course one might do it with infinitesimals, even formally). In any case, whatever language we use to describe differentiability, the great underlying idea is the linearity of small increments. You may imagine f as a complicated nonlinear process, producing an effect f(a) under a cause a. Then, physically, the assumption of differentiability is a superposition principle for the response of your system under small increments of the cause. This fundamental law was recognized as a starting point for understanding complex nonlinear phenomena: those that at least at a microscopic level behave reasonably. An important historical example, I think, is Hooke's law of elasticity, ut tensio, sic vis, thanks to which, for instance, even a very complicated mechanical system slightly displaced from a state of stable equilibrium behaves in a very tame and predictable way. On the same line of thought, a tangent vector at a point p of a manifold M formally has nothing to do with infinitesimals, but you may imagine it as an infinitesimal variation of the point p, and you may think of the tangent space T_pM at p as a microscopic portion of the manifold around p, with the shape of a vector space (so small that it does not meet the other tangent spaces: say that it covers exactly the point p ;-) ). Going back to the example, the differential of f(x,y,z):=x + y² - 2z at any point (x,y,z) is the mentioned linear form R³ → R, because indeed f(x+h,y+k,z+l)=f(x,y,z) + (h+2yk-2l) + o(h,k,l), as you can easily find by expanding: the term in parentheses is exactly the linear form dx + 2ydy - 2dz computed on the vector (h,k,l), if you interpret dx, dy, dz as said above. As to the last question: it is pretty much the same for a differential 1-form on an open domain Ω of R^n, or even more generally, of a Banach space E. A differential 1-form is a map ω:Ω→E*. The differential of a differentiable map f:Ω→R is in particular a differential 1-form. If ω is the differential of f we also say that f is a primitive of ω; we say that ω is exact if such a primitive exists, that is, if it is the differential of some map. Since the second differential D²f of any map f, whenever it exists at a point a, is always a symmetric bilinear map, a necessary condition for ω to be exact is that Dω is symmetric at any point as a 2-form (i.e., ω is a closed 1-form). This turns out to be a sufficient condition locally. --pma (talk) 13:34, 30 November 2009 (UTC)[reply]
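
A quick numerical sketch of that first-order expansion for the running example f(x,y,z) = x + y² − 2z (the base point and step sizes below are arbitrary choices):

 import numpy as np
 
 def f(v):
     x, y, z = v
     return x + y**2 - 2*z
 
 a = np.array([1.0, 3.0, -2.0])      # base point (x0, y0, z0), chosen arbitrarily
 
 def df_a(h):
     # df(a) applied to the increment h = (h1, h2, h3): h1 + 2*y0*h2 - 2*h3
     return h[0] + 2*a[1]*h[1] - 2*h[2]
 
 for scale in [1e-1, 1e-2, 1e-3]:
     h = scale * np.array([0.7, -0.4, 1.1])
     err = f(a + h) - f(a) - df_a(h)
     print(scale, err)   # the error shrinks like scale**2 (the o(h) term is h2**2 here)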

Ratio Test


Does the ratio test only work for series and not sequences? If so, why not? Thanks 131.111.216.150 (talk) 19:38, 29 November 2009 (UTC)[reply]

I guess it sort of does work for sequences, although not for the same reason as for series. For series, it works because the series will behave nearly like a geometric series if the ratio has a limit. For sequences, it works because if the limit of the ratio of successive terms is less than 1, then the terms will approach 0, and if it's greater than 1, the terms diverge (so the "ratio test" for sequences is exactly the same, except you actually know what the limit is in the case that it converges). There doesn't really seem to be much point in applying that rule to sequences though; it's usually easier to just directly observe that the terms are going to 0. --COVIZAPIBETEFOKY (talk) 21:14, 29 November 2009 (UTC)[reply]
You missed the case of the ratio tending to exactly 1. In that case the sequence converges (I think - I can't remember and I've only proved it non-rigorously in my head just now) and you have no idea what it converges to. --Tango (talk) 21:39, 29 November 2009 (UTC)[reply]
No, that's not true. I'm sure there's a simpler counterexample, but off the top of my head, the limit of the ratios of successive terms in the sequence   is 1, but the sequence is divergent. --COVIZAPIBETEFOKY (talk) 22:14, 29 November 2009 (UTC)[reply]
Simpler counterexample: a_n = n. In fact, any polynomial will work. --COVIZAPIBETEFOKY (talk) 22:24, 29 November 2009 (UTC)[reply]
"Why a screwdriver works with screws and not with nails?". --pma (talk) 23:12, 29 November 2009 (UTC)[reply]

If given a real-valued sequence in which the consecutive ratios converge to a real number with absolute value less than one, the sequence will converge to zero (why?). With series, the purpose of the ratio test is to determine convergence (or divergence) of a given series. With sequences, the ratio test only determines convergence to zero. Since the theory of sequences converging to an element x of a topological group is equivalent to that of sequences converging to any y in the group (one can take the group to be the complex numbers or the real numbers), the ratio test does not yield any special information about sequences, although it does for series. --PST 13:55, 30 November 2009 (UTC)[reply]
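
A quick numerical illustration of the three cases (the particular sequences below are just sample choices):

 def report(name, a, n=50):
     terms = [a(k) for k in range(1, n + 1)]
     ratio = terms[-1] / terms[-2]          # ratio of two late consecutive terms
     print(f"{name}: late ratio ~ {ratio:.4f}, a_{n} = {terms[-1]:.4g}")
 
 report("n**2 / 2**n  (L = 1/2, terms -> 0)  ", lambda k: k**2 / 2**k)
 report("3**n / n     (L = 3,   terms -> inf)", lambda k: 3**k / k)
 report("1/n          (L = 1,   terms -> 0)  ", lambda k: 1 / k)
 report("n            (L = 1,   terms -> inf)", lambda k: float(k))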

Riemann zeta function


The page about the Riemann hypothesis says that the negative even integers are trivial zeroes of the Riemann zeta function. However, plugging in -2, for example, produces the divergent series 1+4+9+16+..., which is clearly not 0. --76.211.91.170 (talk) 19:41, 29 November 2009 (UTC)[reply]

As mentioned in Riemann zeta function, the formula ζ(s) = Σ_{n=1}^∞ 1/n^s only works when the real part of s is greater than 1. For other values you need to use Analytic continuation. -- Meni Rosenfeld (talk) 19:54, 29 November 2009 (UTC)[reply]
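
A quick numerical check with mpmath, which evaluates the analytically continued zeta function:

 from mpmath import mp, zeta
 
 mp.dps = 25
 print(zeta(2))        # 1.644934... = pi**2/6, where the series does converge
 print(zeta(-2))       # 0.0, the trivial zero, defined by analytic continuation
 print(zeta(-1))       # -0.08333... = -1/12, again nothing like the divergent series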

Rotation group of the dodecagon


Let φ: ℤ → G, k ↦ rotation by k · 30°, where G is the rotation group of the dodecagon. How to prove the following theorem: “φ is a homomorphism from (ℤ, +) to (G, ∘)”? --84.62.199.19 (talk) 20:29, 29 November 2009 (UTC)[reply]

How to prove the following theorems: “φ is surjective” and “G ≅ ℤ/12ℤ”? --84.62.199.19 (talk) 20:34, 29 November 2009 (UTC)[reply]

For the first you shouldn't need to look much further than the definition of a group homomorphism and actually verify that the requisite properties hold -- this should not be difficult if you understand the group operations in (ℤ, +) and (G, ∘).
For the second question you need to contemplate the First Isomorphism Theorem and the definition of surjective (where again simply explicitly showing that φ meets the requirements should be easy). -- Leland McInnes (talk) 20:55, 29 November 2009 (UTC)[reply]
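
Not a proof, but a quick computational sanity check (identifying each rotation with its angle mod 360, purely as a convention for the sketch):

 def phi(k):
     return (k * 30) % 360          # rotation of the dodecagon by k * 30 degrees
 
 def compose(r, s):
     return (r + s) % 360           # composing two rotations adds their angles
 
 # Homomorphism property phi(a + b) = phi(a) o phi(b) on a range of integers.
 assert all(phi(a + b) == compose(phi(a), phi(b))
            for a in range(-30, 30) for b in range(-30, 30))
 
 # Surjectivity onto the 12 rotations, and kernel = 12Z, which is what the
 # First Isomorphism Theorem turns into G isomorphic to Z/12Z.
 print(sorted({phi(k) for k in range(24)}))         # all 12 multiples of 30
 print([k for k in range(-24, 25) if phi(k) == 0])  # multiples of 12
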
Do you really NEED the First Isomorphism Theorem? Maybe that's the quickest way to do it, but it seems exaggerated to say you NEED to do it that way. Michael Hardy (talk) 02:48, 30 November 2009 (UTC)[reply]

Please give a proof of the three theorems here! --84.62.199.19 (talk) 20:57, 29 November 2009 (UTC)[reply]

We're not going to do it for you, you won't learn anything that way. You really do just need to apply the definitions of homomorphism and surjection and then apply the First Isomorphism theorem. There is nothing more to it. --Tango (talk) 21:27, 29 November 2009 (UTC)[reply]

Where can I find proofs of the three theorems here? --84.62.199.19 (talk) 22:20, 29 November 2009 (UTC)[reply]

No, first you tell us where you got the ℤ, then we'll tell you the proofs. --pma (talk) 22:41, 29 November 2009 (UTC)[reply]
It's U+2124. --Tango (talk) 22:50, 29 November 2009 (UTC)[reply]
I was joking, but thanks for the nice link--pma (talk) 19:41, 30 November 2009 (UTC)[reply]
Maybe start with the right attitude. You come here asking for help and information, while answering a request for help and information in this way?! ~~ Dr Dec (Talk) ~~ 22:28, 29 November 2009 (UTC)[reply]
Prove them yourself. Once you have looked up the definitions and the FIT it will take you a few minutes. You won't find the proofs anywhere unless this example happens to be used in some textbook. --Tango (talk) 22:30, 29 November 2009 (UTC)[reply]

What is φ-(H), where H is the rotation group of the square? --84.62.199.19 (talk) 23:29, 29 November 2009 (UTC)[reply]

Do you mean φ⁻¹(H)? If so, it's 3ℤ. You should be able to prove it yourself by just calculating φ(3ℤ). --Tango (talk) 23:47, 29 November 2009 (UTC)[reply]
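
A quick check with the same angle-mod-360 convention as in the sketch above: φ(3ℤ) is exactly the set of square rotations.

 print(sorted({(3 * k * 30) % 360 for k in range(-40, 40)}))   # [0, 90, 180, 270]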

Does this seem familiar? --PST 06:59, 30 November 2009 (UTC)[reply]

While we would like to help you, you should remember that mathematics is not simply a set of answers to a set of questions. The questions you have asked should be solvable with at most a few minutes of thought. If you are unable to solve the questions, that is fine; however, in this case, you should seek hints or ideas which may aid you. Receiving the full solution will do nothing but increase your grade in whatever course you are taking (if you are taking one); this should not be your primary motivation. --PST 07:06, 30 November 2009 (UTC)[reply]

• It seems that the unicode characters are not universal. I saw the character ℤ as a perfect little blackboard-bold Z while using Google Chrome. I've just opened the page in Firefox and the character ℤ looks like it's been written by a drunken infant. Most disappointing! ~~ Dr Dec (Talk) ~~ 11:34, 1 December 2009 (UTC)[reply]
This has nothing to do with Firefox, the shape only depends on your fonts. Tell Firefox to use the same font as Chrome, and you're fine. — Emil J. 12:08, 1 December 2009 (UTC)[reply]
Really? Cool. So... how might one tell Firefox to do that? ~~ Dr Dec (Talk) ~~ 12:10, 1 December 2009 (UTC)[reply]
Tools -> Options -> Content. --Tango (talk) 12:21, 1 December 2009 (UTC)[reply]
For me it's Edit -> Preferences -> Content. In other words, the answer is version-dependent, but it should be the Content tab in whatever place you normally use to make other settings. — Emil J. 12:29, 1 December 2009 (UTC)[reply]
I've made another comment on my talk page. Please click here. ~~ Dr Dec (Talk) ~~ 12:50, 1 December 2009 (UTC)[reply]