Wikipedia:Reference desk/Archives/Mathematics/2009 May 8

Mathematics desk
< May 7 << Apr | May | Jun >> May 9 >
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


May 8

A derivative of the log of sigmoidal functions

I am trying to show that the following derivative equals -(o - r)
d/dx[o*log(r) + (1-o)log(1-r)]
where r = (1 + e^(-x))^(-1)
and dr/dx = x(1-x)

I have it simplified down to x(1-x)[o(2 + e^(-x) + e^x) - e^x - 1] but I can't do anything with that. Perhaps I made a mistake somewhere in the middle. —Preceding unsigned comment added by 97.77.52.150 (talk) 03:38, 8 May 2009 (UTC)[reply]

I guess you mean dr/dx = r(1-r), which follows from r = (1 + e^(-x))^(-1). McKay (talk) 08:38, 8 May 2009 (UTC)[reply]
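For what it's worth, with the corrected dr/dx = r(1-r) the chain rule gives d/dx[o*log(r) + (1-o)*log(1-r)] = o(1-r) - (1-o)r = o - r, i.e. the negative of the -(o - r) quoted in the question, so there may also be a sign-convention slip. A quick finite-difference check in Python (just a sketch; the function names are mine):

```python
import math

def sigmoid(x):
    # r = (1 + e^(-x))^(-1)
    return 1.0 / (1.0 + math.exp(-x))

def objective(x, o):
    # o*log(r) + (1 - o)*log(1 - r)
    r = sigmoid(x)
    return o * math.log(r) + (1 - o) * math.log(1 - r)

def analytic_derivative(x, o):
    # chain rule with dr/dx = r(1 - r): o(1 - r) - (1 - o)r = o - r
    return o - sigmoid(x)

# central finite difference as an independent check
x, o, h = 0.7, 0.3, 1e-6
numeric = (objective(x + h, o) - objective(x - h, o)) / (2 * h)
print(numeric, analytic_derivative(x, o))  # the two values agree
```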

Mathematics - Sequence analysis

Is there a PRNG that will read large given sequences and produce the rest of the sequence to "n" terms? The sequences are NOT arithmetic or geometric... —Preceding unsigned comment added by 209.115.206.102 (talk) 03:46, 8 May 2009 (UTC)[reply]

Does OEIS help? 207.241.239.70 (talk) 05:45, 8 May 2009 (UTC)[reply]

trigonometric conversions

how can i convert sine into cosine and vice versa? like, for example, sin270 = cos? —Preceding unsigned comment added by 122.50.134.5 (talk) 04:12, 8 May 2009 (UTC)[reply]

Note that for all a and b in R, sin(a + b) = sin(a)*cos(b) + cos(a)*sin(b). To find sin(a - b), in general, it is necessary to know that sine is an odd function whereas cosine is an even function. A function f defined on R is even if f(x) = f(-x) and odd if f(-x) = -f(x) (forgive me for my TeX notation here). Then just note that sin(a - b) = sin(a + (-b)). Then, sin(270) = sin(360-90) = sin(360)*cos(90) - cos(360)*sin(90) = 0 - (+1) = -1. Similarly, cos(a + b) = cos(a)*cos(b) - sin(a)*sin(b). Therefore, if x lies in the first quadrant, sin(90 - x) = sin(90)*cos(x) - cos(90)*sin(x) = cos(x). Try something similar for other quadrants knowing the values of sine and cosine at 90, 180, 270 and 360. As for your question, sin(270) = sin(90 + 180) = sin(90)*cos(180) + cos(90)*sin(180) = cos(180), as desired. --PST 05:17, 8 May 2009 (UTC)[reply]

The previous answer might be tough to follow for a beginner, so let's keep it simple:
sin(x) = cos(x - 90)
cos(x) = sin(x + 90)
cos leads sin by 90 degrees. Simple as that.
So sin(270) = sin(90 + 180) = cos(180) = -1. Rkr1991 (talk) 08:29, 8 May 2009 (UTC)[reply]
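Those phase-shift identities are easy to sanity-check numerically (a sketch in Python; the helper converts degrees to radians first):

```python
import math

def deg(f, x_degrees):
    # evaluate a trig function at an angle given in degrees
    return f(math.radians(x_degrees))

# sin(x) = cos(x - 90) and cos(x) = sin(x + 90), checked at several angles
for x in [0, 30, 90, 180, 270, 359]:
    assert abs(deg(math.sin, x) - deg(math.cos, x - 90)) < 1e-12
    assert abs(deg(math.cos, x) - deg(math.sin, x + 90)) < 1e-12

print(deg(math.sin, 270), deg(math.cos, 180))  # both are -1
```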

sin(x) = cos(x - 90)
cos(x) = sin(x + 90)

but also:

sin(x) = cos(90 - x)
cos(x) = sin(90 - x)

Michael Hardy (talk) 21:21, 9 May 2009 (UTC)[reply]

The responses so far have presented argument or parameter conversion, as you requested. But you can also convert the actual function itself:
sin(x) = sqrt(1 - cos(x)^2)
thus,
cos(x) = sqrt(1 - sin(x)^2)
The only thing you may have to adjust is the sign, as everything comes out positive this way!  ~Kaimbridge~ (talk) 22:53, 9 May 2009 (UTC)[reply]

That doesn't answer the question actually asked, which said:

like, for example, sin270 = cos?

Michael Hardy (talk) 14:30, 10 May 2009 (UTC)[reply]

what does statistical confidence MEAN

if a pregnancy test has only a 90% confidence level, what does that MEAN? Does that mean that, out of a million pregnant women taking it, a hundred thousand will get the result that they are not pregnant? 94.27.243.47 (talk) 11:16, 8 May 2009 (UTC)[reply]

ps. I mean on average. Obviously one such run might have 982,824, another 110,502 etc... —Preceding unsigned comment added by 94.27.243.47 (talk) 11:20, 8 May 2009 (UTC)[reply]
Without getting into the hard math: it's basically a measure of how accurate the results will be if you repeat the test. 90% of the time, you will get the same result. Q T C 11:23, 8 May 2009 (UTC)[reply]
A test like that actually has two confidence levels - the chance of a false positive and the chance of a false negative. If it only gives one, that might mean it is about the same chance. A 90% level means that if you are pregnant there is a 90% chance the test will say you are pregnant (or, if it's about negative results, it means that if you aren't pregnant there is a 90% chance the test will say you aren't pregnant). So, yes, 100,000/1,000,000 will get the wrong result (on average). You can't, however, determine the chance of you being pregnant given a positive (or negative) test result just from that confidence level - for that, you need to know what the chance of you being pregnant before you took the test was (the prior probability). --Tango (talk) 11:25, 8 May 2009 (UTC)[reply]
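Tango's last point can be made concrete with Bayes' theorem. Assuming (purely for illustration) a 90% true-positive rate, a 90% true-negative rate, and some prior probability of pregnancy, the chance of actually being pregnant given a positive test works out like this (a sketch; the numbers are mine, not from the question):

```python
def posterior_pregnant(prior, sensitivity=0.9, specificity=0.9):
    # P(pregnant | positive test) via Bayes' theorem
    true_pos = sensitivity * prior               # pregnant and test positive
    false_pos = (1 - specificity) * (1 - prior)  # not pregnant, test positive
    return true_pos / (true_pos + false_pos)

# the same 90%-accurate test gives very different answers for different priors
for prior in [0.01, 0.5, 0.9]:
    print(prior, round(posterior_pregnant(prior), 3))  # 0.083, 0.9, 0.988
```

The middle case (a 50/50 prior) is the only one where the posterior equals the test's quoted 90%.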

arghhhhh!!! Prior probability is driving me nuts. People here at the reference desk keep telling me about it!!! But it doesn't make ANY sense to me, because if you take prior probability into account tests are meaningless. Let me explain: say you invent a free energy machine, and you get scientists to devise a test for it. They will put in a prior probability of 0 that you are breaking the laws of physics (since an impossibility is automatically 0) and thus no matter what the result is, there will be 0% confidence that you are actually doing what you say... If you get a positive test result, whatever the test, however many times you get it, it will always be a false positive due to the prior probability... but this plainly doesn't make any sense... (in reality, if you get enough positive test results, you start attaching it to generators and start powering cities with it; you wouldn't just keep discarding positive test results...). It's the same for the pregnancy example and the virgin Mary: if the bible is to be believed, and she had not been fertilized by anyone, then if she took a pregnancy test and we actually followed your prior probability advice, we would have to put in a 0 (based on the condition that we are believing the bible), since there is 0 chance of an egg spontaneously fertilizing itself (a biological impossibility). So if her pregnancy test came up positive ten times in a row, we still wouldn't believe she's pregnant. Then if she gave birth, we would assume that the "pregnancy result" of a living breathing baby was just a false positive (the observation of the baby must be an error in our methodology or just a statistically meaningless fluke). Prior probability would have us discard an actual baby as no proof that there had been a pregnancy; it is just a false positive introduced by statistical error... we wouldn't believe our own eyes...

you see what I'm getting at? Prior probability just makes no sense for me. If you choose a random white sock from 100 socks, and then put it into a test for whether a sock is black, it's like we can just disregard the results of the test, since there is 0 chance that it is black (since you chose it from 100 white socks) and if the test DOES say it's black, it would have to be a false positive. But this doesn't make sense to me, because after you choose randomly from 100 white socks, if a reliable test tells you it's black no matter how many times you do the test, then the normal, sane thing to do would be to look at the sock, not just to discard it as not being black.... you see what I'm getting at? 94.27.243.47 (talk) 11:38, 8 May 2009 (UTC)[reply]

No scientist would assume their theories were infallible - the prior probability is not zero, although it might be very low. In such a situation you have to decide which is more likely - your theories are wrong, or all your tests are wrong. However unlikely it may be, a scientist will accept that they were wrong if presented with enough evidence. --Tango (talk) 12:47, 8 May 2009 (UTC)[reply]
but I don't see why the scientist gets to put a prior probability on the test anyway! For example, I don't see why scientists should get to put 0 (or near 0) as a prior probability on a free energy generator. Why can't the prior probability just be left out of the equation, so that the test can speak for itself? If scientists get to put prior probabilities on tests, they can just make tests arbitrarily expensive or lengthy. It doesn't make sense to me why this should be done. Actually I think a pregnancy test for the Virgin Mary is a good example: why should either side (disbelievers and Christians) get to put a prior probability into it? Why couldn't she just take the test and let it speak for itself? 94.27.243.47 (talk) 12:54, 8 May 2009 (UTC)[reply]
Because you get the wrong answer if you don't take prior probability into account. A standard example is DNA testing: There are 10 million people in a city and there has been a murder. The police know that one person is responsible and that they are still in the city (don't ask me how, it's just to make the problem tractable). Some DNA is found at the scene that must have come from the murderer. The police pick a random man off the street and test his DNA and it is a match. In court the jury are told that the DNA test that was used never makes a false negative result and only makes a false positive one time in a million. What is the chance of the man on trial being innocent? If you don't account for prior probability you would probably say "one in a million", but the actual answer is "10 out of 11" (ie. the defendant is far more likely to be innocent than guilty). There are 10 million people in the city; if each of them had been tested there would have been one correct positive result (from the murderer) and 10 incorrect positive results (on average, anyway). Our defendant could be any of those 11 people, but only one of them is guilty. While the chance of the test being wrong was very unlikely (1 in a million), the chance of him being the murderer was even more unlikely (1 in 10 million), so while the test increased the chance of him being guilty, it was still pretty unlikely. --Tango (talk) 13:05, 8 May 2009 (UTC)[reply]
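The arithmetic of that example, as a sketch:

```python
def p_guilty_given_match(population, false_positive_rate):
    # one true positive (the murderer) plus the expected number of false positives
    expected_false_pos = (population - 1) * false_positive_rate
    return 1.0 / (1.0 + expected_false_pos)

p = p_guilty_given_match(population=10_000_001, false_positive_rate=1e-6)
print(p)      # about 1/11, i.e. roughly 0.09
print(1 - p)  # about 10/11 chance of innocence
```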
Wouldn't it be 9/10? The murderer is not random, but he should still count as one of the expected 10. Of course, if DNA false positives are transitive, and there are unusually large equivalence classes in this city (with 15 or 20 people), they are more likely to contain the murderer. The chances of no fewer than 20 people out of 10^7 experiencing the 10^-6 event are about 0.345%; I wonder how significant that is in terms of the expected number of (false) matches? --Tardis (talk) 14:20, 8 May 2009 (UTC)[reply]
No, it's 11. There are 10 false positives and 1 true positive. Strictly, I should have said there were 10,000,001 people in the city, but the error is minute. You are correct that I unrealistically assumed the tests were all independent, but that doesn't affect the point, which is that prior probability makes a big difference. --Tango (talk) 15:09, 8 May 2009 (UTC)[reply]
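Tardis's 0.345% figure can be checked with a Poisson approximation: with 10^7 people and a 10^-6 false-positive rate, the number of matches is approximately Poisson with mean 10, if we (unrealistically, as noted above) treat the tests as independent. A sketch:

```python
import math

def poisson_tail(mean, k):
    # P(X >= k) for X ~ Poisson(mean)
    return 1.0 - sum(math.exp(-mean) * mean**i / math.factorial(i)
                     for i in range(k))

p = poisson_tail(10, 20)
print(p)  # about 0.0035, i.e. roughly 0.345%
```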

what good are sins and cosins and all that stuff

does anyone ever use them? (i mean mathematicians). there are all these buttons, arctan, sin to the negative one, etc etc etc. then there are all these high school rules to learn about their relationships, the law of cosines, all this stuff. but after high school trig class, are they actually ever used in higher math? what are some examples. Thanks. 94.27.243.47 (talk) 11:26, 8 May 2009 (UTC)[reply]

Uses_of_trigonometry Q T C 11:30, 8 May 2009 (UTC)[reply]
yes, sin and cos are differentiable periodic buttons --84.221.69.155 (talk) 13:02, 8 May 2009 (UTC)[reply]
....and each is minus its own second derivative, and that accounts for many of their applications in physics and engineering. Michael Hardy (talk) 20:09, 8 May 2009 (UTC)[reply]
I've used trig to help make circles out of cloth much narrower than the intended radius. —Tamfang (talk) 05:24, 11 May 2009 (UTC)[reply]
The co-sins sounds by far and away the most fun sins and I believe they keep the human race going :) Yes they are used. Mobile phones for instance use them in getting good reception and in decompressing music to play. Dmcq (talk) 12:01, 11 May 2009 (UTC)[reply]

Once you get to Euler's formula (http://en.wiki.x.io/wiki/Euler%27s_formula), exp(i*theta) = cos(theta) + i*sin(theta), you'll find that all those high school rules are actually rather obvious.
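For instance, the angle-addition formulas drop out of exp(i*(a+b)) = exp(i*a)*exp(i*b) by comparing real and imaginary parts, which can be checked numerically (a sketch using Python's cmath):

```python
import cmath
import math

a, b = 0.7, 1.9
lhs = cmath.exp(1j * (a + b))                # cos(a+b) + i*sin(a+b)
rhs = cmath.exp(1j * a) * cmath.exp(1j * b)  # product of the two exponentials

# real parts give cos(a+b) = cos(a)cos(b) - sin(a)sin(b),
# imaginary parts give sin(a+b) = sin(a)cos(b) + cos(a)sin(b)
assert abs(lhs - rhs) < 1e-12
assert abs(lhs.real - (math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b))) < 1e-12
assert abs(lhs.imag - (math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b))) < 1e-12
print("angle-addition identities check out")
```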

someone above said 98% of cases out of a billion isn't 98% confidence -- why

if someone put their hands on a batch of a dozen eggs, attempting to tell which of them are fertilized to be male chicks and which are fertilized to be female chicks (the distribution being 50/50), and they did this 1 billion times and in 98% of cases got 10 out of 12 right, then doesn't that mean that for the first dozen that they do after the billion dozens (used to set accuracy) you will have 98% confidence that they will get 10 out of 12 right?

to me this follows very naturally, but someone above told me the answer is "no". well why not. —Preceding unsigned comment added by 94.27.243.47 (talk) 12:28, 8 May 2009 (UTC)[reply]

You could say there is a 98% chance of him getting 10 out of 12 right, but I'm not sure 98% confidence is right. When we talk about confidence we mean confidence in a hypothesis. If your hypothesis is "he always gets at least 10 out of 12 right" then you have no confidence in that at all, since you've seen that it doesn't always hold (it failed 2% of the time). It only takes one failed test to reject a hypothesis. --Tango (talk) 12:57, 8 May 2009 (UTC)[reply]
The word "confidence" is a technical term in statistics, see e.g. confidence level. Colloquially you can put it as you did - though nearby statisticians may cringe - but in statistics that quantity is called a probability.
You're essentially thinking of the laying of hands on a batch of 12 eggs as a loaded 13-sided die (with 0, ..., 12 on it). You throw the die a billion times and find that it lands on face ten 98% of the time. Heavily loaded die indeed. Maybe the person doing this is trying to get the result 10/12 all of the time and that's the best they can do (still quite phenomenal skill), or maybe they always know everything and aim for the result 10/12 only 98% of the time. A billion times is enough to determine the probability to a high degree of accuracy, and saying that "the probability of 10/12 is 98%" is well within the accuracy generally implied by a two-digit number.
The model as described above may or may not fit the problem. One obvious alternative would be to model each egg independently, instead of batch-wise. This would amount to throwing a coin 12 billion times, with heads for correct and tails for incorrect. Then things would change: the probability of heads would be around 10/12, but the probability of 10 heads in a batch of 12 would be far from 98%. If the probability of heads is 10/12, then the probability of 10 heads in a batch of 12 is about 29.6% (see Binomial distribution). The data you describe clearly does not fit this model, though; you may be confident that the egg-person knows more than they let on. -- Coffee2theorems (talk) 19:28, 8 May 2009 (UTC)[reply]
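The 29.6% figure above comes straight from the binomial distribution (a sketch; math.comb needs Python 3.8+):

```python
import math

def binom_pmf(n, k, p):
    # probability of exactly k successes in n independent trials
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# if each egg is called correctly with probability 10/12 independently,
# the chance of exactly 10 right out of 12 is far from 98%
p10 = binom_pmf(12, 10, 10 / 12)
print(p10)  # about 0.296
```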

I give up.

I tried to read the confidence interval article and got as far as its sentence about rolling two dice.
If rolling two -- just 2! -- dice and getting double-sixes leaves a statistician 97% confident that they're loaded, then fuck statistics. 79.122.54.77 (talk) 08:23, 9 May 2009 (UTC)[reply]

Note that the 97% confidence level only applies if we have no reason (apart from the test) to assume or guess anything about the dice (there, I avoided "prior probability" for you—but that's what I'm talking about, of course). Look at it this way: Someone gives you two pairs of dice and asks you to decide which one is loaded; exactly one of the pairs is, but you're only allowed to roll one of them. So you take one pair randomly, roll it, and get double six. Now, you had no idea before the test whether the dice were loaded or not, so after the test you can be pretty sure (97%, not fool-proof but anyway) that they were. It's a completely different matter if you go to the store and buy a pair of, presumably fair, dice, roll them and get a double six. A little curious perhaps, but not enough to even make you question the fairness of the pair—you'd need a couple more curious results for that. And it's also a very different matter if you pick up a couple of dice you've used hundreds of times before and not suspected of anything, roll them, and get a double six. The key here is that if you don't suspect the dice to be loaded, that means you have (sort of) good reason to assume that they are fair. You need to take that into account when interpreting the results of your test. —JAOTC 10:14, 9 May 2009 (UTC)[reply]
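JAO's two-pairs scenario also shows where a number like 97% comes from. If (as an idealization of my own) the loaded pair always shows double six and the fair pair shows it with probability 1/36, then after picking a pair at random and rolling a double six (a sketch):

```python
def p_loaded_given_double_six(prior_loaded=0.5,
                              p_six_loaded=1.0,   # idealized: loaded pair always rolls 6-6
                              p_six_fair=1 / 36):
    # Bayes' theorem: P(loaded | double six rolled)
    num = p_six_loaded * prior_loaded
    den = num + p_six_fair * (1 - prior_loaded)
    return num / den

print(p_loaded_given_double_six())       # 36/37, about 0.973
print(p_loaded_given_double_six(0.001))  # store-bought pair: posterior stays small
```

With the 50/50 prior of the two-pairs setup the posterior is about 97%; with a tiny prior (dice you have no reason to suspect) the same roll barely moves you.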
your last sentence has finally made things "click" for me. You said "you need to take that into account when interpreting the results of your test.". Prior probability is very different from the other aspects of the test because it's kind of subjective, you have to kind of guess what kinds of probabilities you might be talking about... am I right? 79.122.54.77 (talk) 10:37, 9 May 2009 (UTC)[reply]
Not always, but in this case and many others, yes. What you're really doing (without, perhaps, thinking about it) is combining the results of several tests. The first test is just a basic knowledge of dice: most are not loaded (well, not intentionally at least). That works towards the pair not being loaded, but how much? You're right: very hard to quantify. The second test might be asking yourself where you got the dice. Would your local supermarket try to trick you by selling loaded dice? Not very likely. But how unlikely? Very hard to quantify. Still, works towards the pair not being loaded. Maybe you looked on the package, maybe it said "Fair dice". That also works towards the pair not being loaded (probably pretty much, but how much?). The roll test finally works towards the pair being loaded, and how much it does that is easier to quantify. Still, certainly not enough to change your final conclusion, which is probably something along "Cool, I got a double six on the first roll. Still pretty sure they are fair though". —JAOTC 11:13, 9 May 2009 (UTC)[reply]

Calculation problem including vector

I have been tackling some formulas in physics. I do not know how this equation stands.

 

where C,v,u are vectors. Please explain.Like sushi (talk) 15:18, 8 May 2009 (UTC)[reply]

What do you mean "how it stands"? What is your question? We'll probably need some context as well, like knowing what all those symbols mean. If you are wondering what the equation means, then you probably want the science desk, but they'll need context too. --Tango (talk) 15:34, 8 May 2009 (UTC)[reply]
I meant "Can the right side be derived from the middle one? And if so, how?". The context is the Doppler shift. C is the velocity of the signal, an emitter is moving with constant velocity u, and a receiver (absorber) is moving with constant velocity v relative to the transmitter.
What I am reading now starts with this formula.
 
where r_R(t_R) and r_E(t_E) are the position vectors for the events of absorption and emission, at times t_R and t_E.
Then it is derived that
 
then
 
The one in question follows this, making use of the dot product identity a·b = |a||b|cos(theta_{a,b}), where theta_{a,b} is the angle between the a and b vectors.
The source is Doppler effect. Like sushi (talk) 17:01, 8 May 2009 (UTC)[reply]
That source explains what is being done at each stage. What is it, precisely, that you don't understand? --Tango (talk) 17:14, 8 May 2009 (UTC)[reply]
Sorry, I see it doesn't explain the step you are actually asking about. Hang on, let me read it properly! --Tango (talk) 17:16, 8 May 2009 (UTC)[reply]
Hmmm... it's a good question! I can't see what they've done there either. For anyone else trying to figure it out, the notation theta_{C,u} means "the angle between C and u". I'm stuck trying to work out the relationship between those two angles, and drawing a blank. It looks like it should be some very simple trig, but I just can't see it. --Tango (talk) 17:21, 8 May 2009 (UTC)[reply]

If you draw the pictures, it becomes clear that |v|, |C|, |C − v| are the lengths of the three sides of a triangle, and the two cosines are those of two of the angles. This looks reminiscent of the law of tangents. To be continued after I've looked at the details... Michael Hardy (talk) 20:21, 8 May 2009 (UTC)[reply]
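The triangle observation can be checked directly: for any vectors C and v, |C - v|^2 = |C|^2 + |v|^2 - 2|C||v|cos(theta_{C,v}), the law of cosines, with the cosine supplied by the dot product (a sketch with hand-picked vectors):

```python
import math

def norm(a):
    return math.sqrt(sum(x * x for x in a))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

C = (3.0, 1.0)
v = (1.0, 2.0)
diff = tuple(c - w for c, w in zip(C, v))

cos_theta = dot(C, v) / (norm(C) * norm(v))  # cos of the angle between C and v
law_of_cosines = norm(C)**2 + norm(v)**2 - 2 * norm(C) * norm(v) * cos_theta

assert abs(norm(diff)**2 - law_of_cosines) < 1e-12
print("law of cosines holds for |C - v|")
```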

factoring integers in polynomial time

At the rump session of Eurocrypt last week, Claus P. Schnorr claimed there is a polynomial time method to factor integers. His slides are here. Any idea what it is that he's trying to do? Obviously, verifying the result (if it's correct) will take significant time and effort. But does the approach make any sense at all? 207.241.239.70 (talk) 18:50, 8 May 2009 (UTC)[reply]

Well, LLL was developed to factor integer polynomials, but more generally finds somewhat short vectors in integer lattices (and those vectors are the coefficient vectors of factors). Some of Schnorr's recent work appears to be geared towards adapting LLL to handle indefinite forms (notions of "length" that allow zero or even negative length), especially using methods from numerical linear algebra. He apparently has phrased integer factoring as a short vector problem in a lattice under an indefinite form. Part of the slides mention a subproblem that has no known polynomial time solution, so perhaps he is just indicating progress on the solution. The slides do not include enough information to say much more, and the preprint site has no recent work on this topic (and the papers there have typos that should not make it past basic proofreading), so one should find better descriptions of his claims before trying to really understand them. JackSchmidt (talk) 01:39, 9 May 2009 (UTC)[reply]

Z-hat

I've been encountering Z-hat, which is the inverse limit of the Z/nZ over all positive n, in my algebraic number theory class, but I don't have any kind of an intuition for what Z-hat is like. It's also irritatingly difficult to find webpages on the topic (does Wikipedia have an article?). So, what I'd like to know is, what kind of interesting properties does Z-hat have? Is there a good way to visualize it? Is there a nicer characterization than by the inverse limit? Anything that would help build intuition is of interest to me. Eric. 131.215.159.91 (talk) 21:43, 8 May 2009 (UTC)[reply]

You might check whether Z-hat is the direct product of the p-adic integers over all primes p. I think something like finite direct products commute with inverse limits is true, and Z/nZ is the direct product of its Sylow subgroups. Adele ring has some information on this. JackSchmidt (talk) 01:00, 9 May 2009 (UTC)[reply]
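The fact that Z/nZ is the direct product of its Sylow subgroups is just the Chinese remainder theorem, which is easy to check for a small n (a sketch for n = 12 = 4 * 3):

```python
# CRT: Z/12Z is isomorphic to Z/4Z x Z/3Z via x -> (x mod 4, x mod 3)
pairs = [(x % 4, x % 3) for x in range(12)]

# the map hits all 4 * 3 = 12 pairs, so it is a bijection...
assert len(set(pairs)) == 12

# ...and it respects addition, so it is a ring isomorphism
for x in range(12):
    for y in range(12):
        s = (x + y) % 12
        assert (s % 4, s % 3) == ((x + y) % 4, (x + y) % 3)

print("Z/12Z decomposes as Z/4Z x Z/3Z")
```

Passing to the inverse limit over all n, the same bookkeeping suggests why Z-hat should decompose as a product over the primes.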
Thanks, that does help. Eric. 131.215.159.91 (talk) 19:21, 10 May 2009 (UTC)[reply]