Wikipedia:Reference desk/Archives/Mathematics/2009 November 27

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 27


Constructing a parabola that squishes near zero


I'm generally pretty good at coming up with continuous functions that fit sampled data measured from quasi-stochastic processes, but this one is stumping me.

I have a data set that can be characterized by a sequence of probability density functions. I'm trying to model the distributions.

 
Figure: My model so far. X axis: -1 to 1, the range of the parabolic distribution functions. Y axis: values associated with my data. Z axis: probability.

The distributions are well modeled as parabolas, pointing either upward or downward, spanning the interval [-1, 1], with the total area under the curve equal to 1.0. The whole parabola must lie above zero.

For the purpose of coming up with a function that describes how the parabola's parameters change with respect to other dependent variables derived from my data, I have run into situations where the parabola goes negative within the interval. This is an artifact of extrapolating the smooth functions I use to model the way the parabola parameters change along the Y axis in the figure. My data set gives me the range of the Y axis, but the Y axis can extend farther in either direction. The X axis range is fixed. The actual distributions at each Y value should never go negative.

So I'm trying to come up with an equation for a parabola that "squishes" when it gets near zero. Consider the parabola y = a*x^2 + b. For these four combinations of a and b, here's what I want to see:

  • a>0, b>=0: Parabola's legs point upward, with a minimum in the interval. Should look pretty much like a parabola.
  • a>0, b<0: Normally this parabola would have its minimum dipping below zero. I want it to be blunted so that it doesn't quite reach zero.
  • a<0, b>0: Parabola's legs point downward, with a maximum in the interval. The ends of the parabola should be squished so they don't cross zero. This would look similar to a Gaussian curve.
  • a<0, b<0: I don't care what happens in this case, because it isn't relevant to the problem.

I need another term to add to the parabola that depends on how negative the parabola's value would be. Any ideas? ~Amatulić (talk) 03:34, 27 November 2009 (UTC)[reply]

  y = exp(a*x^2 + b) meets most of your requirements. For a>0 it has a minimum at (0, e^b); for a<0 it has a maximum at (0, e^b) and tends to 0 for large positive and negative x. For all values of a and b it is a strictly positive function. Gandalf61 (talk) 11:00, 27 November 2009 (UTC)[reply]
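A minimal numerical sketch of that suggestion (an editorial illustration, not part of the original thread; the coefficient values are arbitrary), evaluating y = exp(a*x^2 + b) on [-1, 1] for the three relevant sign combinations:

import numpy as np

# Illustration only: the suggested form y = exp(a*x^2 + b).
# The (a, b) pairs below are arbitrary examples of the three relevant cases.
def squished_parabola(x, a, b):
    return np.exp(a * x**2 + b)

x = np.linspace(-1.0, 1.0, 201)
for a, b in [(2.0, 0.5), (2.0, -1.0), (-2.0, 0.5)]:
    y = squished_parabola(x, a, b)
    print(f"a={a:+}, b={b:+}: min={y.min():.4f}, max={y.max():.4f}, "
          f"strictly positive: {bool((y > 0).all())}")

In each case the curve stays strictly positive on the interval, which is the "squish" behaviour asked for above.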
Ah. Yes, that seems to be the form I need. Thanks! I was struggling with figuring out a new term to add to the parabola but this is simpler. ~Amatulić (talk) 04:29, 28 November 2009 (UTC)[reply]
On the other hand, though, that form has a really messy integral for the area under the curve. One reason I picked a parabolic function was the ease of integration. In any case, I'll see what I can do with it. ~Amatulić (talk) 04:33, 28 November 2009 (UTC)[reply]
 
Figure: Updated model using y = a*exp(b*(x-c)^2) as a basis, rather than parabolas.

Update: Well, I ended up using y = a*exp(b*(x-c)^2).

Unlike the parabola, this form has no closed-form antiderivative in elementary functions, and I need the integral to calculate the area under the curve. I ended up having to do it numerically, with a Taylor series. Slower, but okay.
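For what it's worth, here is a hedged sketch (an editorial illustration, not the poster's actual code; the parameter values are made up) of how that area could be computed: the antiderivative of a*exp(b*(x-c)^2) is not elementary, but it can be written with the error function erf (for b < 0) or the imaginary error function erfi (for b > 0), both available in scipy, and cross-checked against plain numerical quadrature.

import numpy as np
from scipy.integrate import quad
from scipy.special import erf, erfi

def area_via_erf(a, b, c, lo=-1.0, hi=1.0):
    """Area under a*exp(b*(x - c)^2) on [lo, hi] via the (imaginary) error function."""
    if b == 0:
        return a * (hi - lo)
    s = np.sqrt(abs(b))
    F = erf if b < 0 else erfi          # erf handles b < 0, erfi handles b > 0

    def antiderivative(x):
        return a * np.sqrt(np.pi) / (2 * s) * F(s * (x - c))

    return antiderivative(hi) - antiderivative(lo)

def area_via_quad(a, b, c, lo=-1.0, hi=1.0):
    """Same area by straightforward numerical quadrature, as a cross-check."""
    value, _ = quad(lambda x: a * np.exp(b * (x - c)**2), lo, hi)
    return value

a, b, c = 0.8, -1.5, 0.2                # arbitrary example parameters
print(area_via_erf(a, b, c), area_via_quad(a, b, c))   # the two should agree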

It looks like it does what I need, though. To the right is my model updated using this function as a basis, rather than parabolas. It looks like a parabola that squishes as it approaches zero, regardless of which way it's pointing, and that's just what I needed. Thanks! ~Amatulić (talk) 05:18, 30 November 2009 (UTC)[reply]

Question regarding a LOG property


b^(log_b(x)) = x because antilog_b(log_b(x)) = x

The above is from List_of_logarithmic_identities#Canceling_exponentials.

I have not come across the "antilog" function in my book Finite Mathematics for Business, Economics, Life Sciences and Social Sciences by Raymond A. Barnett, Michael R. Ziegler, and Karl E. Byleen.

My book just gives b^(log_b(x)) = x as the property, without explaining why.

I want to understand why b^(log_b(x)) equals x.

Help appreciated. --33rogers (talk) 07:54, 27 November 2009 (UTC)[reply]

Suppose y = log_b(x); then by definition, b^y = x (see logarithm for the definition). However, since we defined y = log_b(x), we substitute and obtain b^(log_b(x)) = x. Hope this helps, --PST 08:02, 27 November 2009 (UTC)[reply]
Also, "antilog" is basically the inverse operation to "log" (analogous to "addition" being the inverse of "subtraction" or "multiplication" being the inverse of "division"). Thus, antilog_b(y) = b^y, if you like. Notice that antilog_b(log_b(x)) = x (Prove!). (Note also that many concepts may not exist in the book you mentioned, but this does not imply that they do not exist.) --PST 08:11, 27 November 2009 (UTC)[reply]
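As a purely numerical aside (an editorial addition, not part of the original exchange), the two cancellations are easy to check with Python's math module, writing antilog_b(y) simply as b**y:

import math

def log_b(x, b):
    """Logarithm to base b, via the change-of-base formula."""
    return math.log(x) / math.log(b)

def antilog_b(y, b):
    """The antilog to base b is just exponentiation: b**y."""
    return b ** y

b, x = 3.0, 7.5
print(antilog_b(log_b(x, b), b))   # b**(log_b(x))  -> 7.5, up to rounding error
print(log_b(antilog_b(x, b), b))   # log_b(b**x)    -> 7.5, up to rounding error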
To become acquainted with some of these concepts, you might like to attempt the following (try to solve them without looking at the hints):
1. Prove that log_b(x*y) = log_b(x) + log_b(y).
Hint: Let m = log_b(x) and let n = log_b(y); you wish to show, by definition, that b^(m+n) = x*y - write the power in terms of m and n, and then compute (apply index laws)!
2. Prove that log_b(x/y) = log_b(x) - log_b(y) (Hint: Solve question one first, by applying the hint if necessary).
--PST 08:32, 27 November 2009 (UTC)[reply]

The answers to 1 and 2 are listed on List_of_logarithmic_identities :D

log_b(x*y) = log_b(x) + log_b(y) because b^c * b^d = b^(c+d)

log_b(x/y) = log_b(x) - log_b(y) because b^c / b^d = b^(c-d)

--33rogers (talk) 09:06, 27 November 2009 (UTC)[reply]

 

I got this:

Suppose y = log_b(x); then by definition, b^y = x

(The above being exponent to b)

but I keep getting stuck on where you are substituting what? --33rogers (talk) 08:47, 27 November 2009 (UTC)[reply]

Since y = log_b(x),
b^(log_b(x)) = b^y,
so given b^y = x,
we obtain b^(log_b(x)) = x. --PST 09:00, 27 November 2009 (UTC)[reply]
Since they are equal, aren't we supposed to end, for example, here: x = x (or something similar where both sides are equal)? --33rogers (talk) 09:09, 27 November 2009 (UTC)[reply]


I am getting to here b ^(b^y=x) = x via substituting. --33rogers (talk) 09:13, 27 November 2009 (UTC)[reply]
Colloquially, log_b(x) "is what you need to power b by to get x". Or, "b power something equal to y is b power y, so since b power y equals x, so does b power log_b(x)". --PST 09:46, 27 November 2009 (UTC)[reply]

I am trying to solve b^(log_b(x)) = x so that in the end the right-hand side (of the =) is the same as the left-hand side.

I am finding it very difficult to decipher your last response. Please write the math equations step by step in separate lines. Thanks.

--33rogers (talk) 19:58, 27 November 2009 (UTC)[reply]

If you are unable to understand that, I am starting to doubt whether my help will be of any use to you. Please attempt (some effort) to understand that which I have said, prior to posting. I am sure that with a little bit of thought, it is quite clear. I shall attempt one last time at helping you; otherwise, I think that it is better for someone else to intervene:
1. y = log_b(x)
2. b^y = x
3. b^(log_b(x)) = x
Each step above is very clear - use step 1 and step 2 to conclude step 3 (or, more specifically, substitute step 1 into step 2 to conclude step 3). There is absolutely no deep thinking required. --PST 01:14, 28 November 2009 (UTC)[reply]
You seem a little confused as to what constitutes a proof of these identities (such as b^(log_b(x)) = x). Starting from the identities and leading to a tautology (such as 2=2) is the exact reverse of what you want to do; you want to start with statements that you know are true or valid (in this case, saying "let y = log_b(x)"), and lead to the conclusion. In this case, since the argument PST gave above works for any x>0, the identity holds for any x>0.
However, it is quite easy (just not very useful) to do what you're asking; starting with the equation b^(log_b(x)) = x, simply apply the definition of logarithms to get log_b(x) = log_b(x). The problem with this is it proves the implication in the wrong direction. You've proven that p ⇒ q, where p needs to be proven and q is known to be true, but since any statement, true or false, 'implies' a true statement, this isn't useful. You really want q ⇒ p.
Note that the properties b^(log_b(x)) = x and log_b(b^x) = x are a direct result of the logarithm and exponential function being inverses of each other. In general, for any functions f and g, they satisfy (f(x) = y ⟺ g(y) = x) if and only if they satisfy (g(f(x)) = x and f(g(y)) = y). The former is usually used as a definition of inverse functions, but the latter works just as well. Proving the implication from the former to the latter should be easy if you understood the above proofs in the specific case of the exponential and logarithmic functions. Proving that the latter implies the former is not much harder. The nested equivalences may look a bit scary, but if you work through them carefully, they should unfold pretty nicely as well. Remember that if two things are equal, then you can substitute one expression for the other.
Hope that helps. --COVIZAPIBETEFOKY (talk) 03:51, 28 November 2009 (UTC)[reply]
  Resolved

--33rogers (talk) 21:42, 29 November 2009 (UTC)[reply]

approximate percentage of true statements that can be proved in an axiomatic system?


this is related to my question above: now i am wondering approximately what percent of the true statements in an axiomatic system can be proved using only those axioms? anyone have any ideas? 92.230.71.130 (talk) 09:58, 27 November 2009 (UTC)[reply]

Well, my guess would be that there are an infinite number of both... Lukipuk (talk) 11:46, 27 November 2009 (UTC)[reply]
Which means that one needs to be more specific about what they mean by "percentage". My guess is that for most reasonable interpretations, and for the kind of axiomatic systems the OP has in mind, the answer will be 0%. -- Meni Rosenfeld (talk) 13:02, 27 November 2009 (UTC)[reply]
Since the set of all statements is countable, so is any subset. There are infinitely many of either (just pick one, called p, and generate infinitely many by {"p and a=a", "p and a=a and a=a", "p and a=a and a=a and a=a", ...}). So both sets have the same cardinality. --Stephan Schulz (talk) 13:18, 27 November 2009 (UTC)[reply]
I agree with Meni. Devising a meaningful way of quantifying formulas is nontrivial, but I would expect that any sensible way of doing that will make the percentage of provable statements to be zero. — Emil J. 13:36, 27 November 2009 (UTC)[reply]
The question was about provable statements among true statements. In that case I would rather expect it to be very close to 1. --Stephan Schulz (talk) 13:48, 27 November 2009 (UTC)[reply]
No, it is zero, based on Kolmogorov complexity. There is a formal proof here. 67.117.145.149 (talk) 18:55, 27 November 2009 (UTC)[reply]
Depends how you encode them. If half are encoded with TRUE OR at the beginning then more than half are provable. What would be interesting is if the percentage for some encoding was exactly equal to some nice number like 1/e but the equality was unprovable, but I don't believe even that can be true. Dmcq (talk) 00:07, 28 November 2009 (UTC)[reply]
You could arrange that fairly easily, I think, if we're taking the approach of fixing an order on statements and taking limits of the probability for finite initial segments. If you use a sensible order (like the order by increasing length used in the paper 67. cites), I expect you'll get 0, but if you use a silly one you can get whatever you want. Arrange matters so that 1/e of all statements are provable and 1-1/e of them are equivalent to p, where p is some fixed statement which is true, unprovable, and whose unprovability is unprovable (and all the other statements are squeezed into a set of density 0, of course). Algebraist 00:16, 28 November 2009 (UTC)[reply]
Right you are. Nice one. I was thinking of something like Chaitin's constant, where one gets a bit at a time from whether statements are provable or not, but that's not what this is about. Just reading that, I'm not totally convinced by the argument that the halting probability cannot be a computable number, just that we don't know which one. I'll have to think about it. Dmcq (talk)
But, I think that ordering falls outside of "reasonable", which I would hope starts with a recursive enumeration of the sentences in the language, given how the question is stated. 67.117.145.149 (talk) 21:41, 28 November 2009 (UTC)[reply]
You want reasonable!!! This is the maths reference desk. Reasoned not reasonable is the rule here :) Dmcq (talk) 22:14, 28 November 2009 (UTC)[reply]
My ordering is (or rather can be made to be, since I wasn't precise about it) a recursive enumeration of sentences. It certainly isn't sensible, though, as I thought I had made clear. Algebraist 00:53, 29 November 2009 (UTC)[reply]
Yeah, ok. I think in your description, not all the sentences appear, but you could interleave an enumeration of the language's sentences Sk (removing any duplicates) and an enumeration of its theorems Tk, so you'd get a 1:1 ratio or any other ratio you wanted. In the limit, almost all the sentences in the Sk enumeration would be non-theorems, so the ratio would be controlled by your interleaving scheme. I guess I should re-read the paper and check the exact conditions it requires. 67.117.145.149 (talk) 02:44, 29 November 2009 (UTC)[reply]
As I said above, that paper uses the most obvious ordering: it orders sentences by increasing length. Algebraist 15:58, 29 November 2009 (UTC)[reply]

Operations


Reading Singh's "Fermat's Last Theorem", we have the following quote from Eichler: "There are five elementary arithmetical operations: addition, subtraction, multiplication, division, and… modular forms."

I have two questions:

1) Are subtraction and division technically unique operations? I thought they were just addition and multiplication with inverse elements. For instance, the definition of a field requires two unique operations, not four?

2) Why did Eichler consider modular forms an elementary operation? I am lost as to his line of thought. I appreciate that the mathematics of MFs is complicated, and way beyond my own knowledge, but if anyone could clear this up in some way, it would be most appreciated. I have a feeling it could be linked to their symmetry / invariance properties.... —Preceding unsigned comment added by 88.96.146.70 (talk) 14:55, 27 November 2009 (UTC)[reply]

I think he's talking about everyday arithmetic calculation, and the elementary operation he refers to is the "modulo" or "remainder" operation, not what mathematicians call modular forms. 67.117.145.149 (talk) 19:06, 27 November 2009 (UTC)[reply]


As to 1), speaking at an abstract level, you are right, but here the four operations really refer to very basic school maths. As to 2), here Martin Eichler certainly refers to modular forms, not to modulo arithmetic as 67 believes. I haven't read the source, but it's pretty clear that Eichler's line was originally intended as a catchy sentence, meaning that modular forms have a huge and fundamental role in number theory. And I guess Singh, with this quotation, particularly refers to their role in the Shimura-Taniyama-Weil conjecture. There is also a kind of joke in the double meaning of elementary as either "very easy" or "constitutive", and of arithmetic as either "basic maths" or "number theory". Also note that similar sentences are quite common in books popularizing maths, e.g. in definitions of a mathematician: "a mathematician is somebody for whom X (put here any technical term obscure to the layman) is as natural an operation as two and two is for you". --pma (talk) 09:19, 28 November 2009 (UTC)[reply]