Wikipedia:Reference desk/Archives/Mathematics/2016 February 24
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
February 24
Help me with a question about the reliability of mixing frequentist probability with Bayesian probability
I need your help because I cannot find the answer in any mathematics textbook. There are two types of probability: frequentist probability and Bayesian probability. I have no problem using either of them, and I trust the outcomes of both. But I have full confidence in them only when I am using each by itself.
My problem arises when half the probabilities in a problem are derived from frequentist methods, the other half are derived from Bayesian methods, and the final result comes from a procedure that uses both kinds of probabilities. Then I am completely unsure how much confidence I can place in the result of such a calculation. No textbook tells me what happens when these two types of probability are mixed together.
Can someone please enlighten me? 175.45.116.60 (talk) 03:14, 24 February 2016 (UTC)
- Can you give an example where they both appear in the same problem? Loraof (talk) 15:08, 24 February 2016 (UTC)
- You are talking about using methods from both frequentist and Bayesian schools. These are classified as Probability_interpretations. Both have ways of estimating some sort of confidence in a result, called the confidence interval and the credible interval; note the sections Credible_interval#Confidence_interval and Confidence_interval#Credible_interval. These are notions of certainty that work within the rules of an interpretation, but they have nothing to do with your confidence in the validity of the method, or your confidence that you performed the method correctly, etc. I'm not sure if that gets at the source of your concern, but there is in principle nothing wrong with using e.g. Bayesian methods to estimate a probability distribution and then using that distribution as part of some additional non-Bayesian methods. At the same time, there are tons of ways you can mix and match Bayesian and frequentist inference that are totally meaningless and useless. So there is no general rule for or against using methods associated with the different interpretations, and your confidence in such a method is not addressable within the scope of those methods; rather, it lies in your own approach to epistemology and doxastic logic. Maybe an example will help: define the statement S = "X is in (0,100) with 95% confidence". Now, S may be derived from a frequentist approach, but no frequentist method will allow you to say "I believe statement S is true with 90% confidence", or "I am 85% confident that I made no errors when deriving statement S". SemanticMantis (talk) 15:31, 24 February 2016 (UTC)
- To amplify what Loraof asked above, what do you mean in particular by "half the probabilities are derived from frequency probabilities"? Are you simply referring to tabulations of observed frequencies? (In which case you may need some kind of smoothing method for items or categories that have low observed counts.) Or are you talking about the outputs of frequentist procedures -- which are mostly not probabilities? Most practising statisticians these days are "eclectic", open to using a variety of Bayesian, frequentist, and empirical methods, depending on the problem at hand. But you do need to give us more information about what sort of things you are trying to combine, and why. Jheald (talk) 16:08, 24 February 2016 (UTC)
- You are justified in being cautious. A frequentist confidence interval is (a ≤ x ≤ b) where a and b are stochastic variables and x is a constant. The corresponding Bayesian concept is (a ≤ x ≤ b) where a and b are constants and x is a stochastic variable. These concepts are routinely confused. However, they differ, and their probabilities do not necessarily have the same value. Bo Jacoby (talk) 06:20, 26 February 2016 (UTC).
- For example, consider a sample of n balls from an urn of N balls. Let there be k white balls in the sample, (0 ≤ k ≤ n), and K white balls in the urn, (0 ≤ K ≤ N). Let p = k/n and P = K/N be the relative frequencies of white balls in the sample and in the urn, respectively. The event (P ≤ p) knowing P is not the same thing as the hypothesis (P ≤ p) knowing p. Consider the case n = 2 and N = 4. When P = 50% then Pr(P ≤ p) = 83%, but when p = 50% then Pr(P ≤ p) = 70%. Bo Jacoby (talk) 08:21, 26 February 2016 (UTC).
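Bo Jacoby's two numbers can be reproduced by direct enumeration. The sketch below assumes a uniform prior on K for the Bayesian direction, since no prior is stated in the thread:

```python
from math import comb

N, n = 4, 2  # urn of N balls, sample of n

# Frequentist direction: fix K = 2 white balls in the urn (P = 50%) and
# compute Pr(P <= p) over the random sample via the hypergeometric law.
K = 2
freq = sum(comb(K, k) * comb(N - K, n - k)
           for k in range(n + 1) if k / n >= K / N) / comb(N, n)
assert abs(freq - 5 / 6) < 1e-12   # 5/6 ~ 83%

# Bayesian direction: observe k = 1 white ball in the sample (p = 50%),
# put a uniform prior on K in {0, ..., N}, and compute the posterior
# probability of the hypothesis P <= p.
k = 1
likelihood = [comb(K, k) * comb(N - K, n - k) for K in range(N + 1)]
bayes = sum(likelihood[K] for K in range(N + 1)
            if K / N <= k / n) / sum(likelihood)
assert abs(bayes - 0.7) < 1e-12    # 7/10 = 70%
```

The mismatch (83% vs. 70%) is exactly the point of the example: the same inequality, conditioned differently, yields different probabilities.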
Divisible abelian groups
Let G be an abelian group and let H be the intersection of the subgroups nG where n ranges over the positive integers. Is H always divisible? GeoffreyT2000 (talk) 03:33, 24 February 2016 (UTC)
- Yes. If x is in H, then for any n, there exists y such that x = ny, by definition of H. Sławomir Biały 14:11, 24 February 2016 (UTC)
- But y need not be in H. If ny = x and nmz = x, then nmz = ny but this need not imply mz = y unless G is torsion-free. GeoffreyT2000 (talk) 18:05, 24 February 2016 (UTC)
- Hmm... right. That suggests perhaps a counterexample is possible. Sławomir Biały 18:54, 24 February 2016 (UTC)
Simple monotonic functions that asymptotically approach a value from below
I'm looking for simple smooth monotonically increasing functions f(x) that have all of the following properties:
- f(0) = 0
- as x approaches infinity, f(x) asymptotically approaches, from below, a positive constant c
What are the simplest functions you can think of that fit these conditions? Thanks.
—SeekingAnswers (reply) 13:18, 24 February 2016 (UTC)
- Provided you are satisfied with monotonically increasing for x > 0, there is the family of rational functions f(x) = cx/(x + k) with k > 0. The smaller the value of k, the more rapidly the function approaches c. Gandalf61 (talk) 13:32, 24 February 2016 (UTC)
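The claimed behavior of the rational family can be checked numerically. The specific formula f(x) = cx/(x + k) used below is an assumption consistent with the stated properties (f(0) = 0, increasing for x > 0, approaching c faster for smaller k):

```python
# Numerical sanity check for the rational family f(x) = c*x/(x + k), k > 0
# (the formula itself is an assumption matching the described behavior).
c = 5.0

def f(x, k):
    return c * x / (x + k)

xs = [0.5 * i for i in range(1, 200)]
for k in (0.1, 1.0, 10.0):
    assert f(0.0, k) == 0.0                                      # f(0) = 0
    assert all(f(a, k) < f(b, k) for a, b in zip(xs, xs[1:]))    # increasing
    assert all(f(x, k) < c for x in xs)                          # stays below c

# smaller k -> closer to c at the same x, i.e. a more rapid approach
assert f(10.0, 0.1) > f(10.0, 10.0)
```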
- (ec) In general, for a > 0, 1/x^a asymptotically approaches 0, so c − 1/x^a approaches the constant. Now you need to shift that graph left until f(0) = 0. The simplest I can come up with is f(x) = c − 1/(x + c^(−1/a))^a. The smaller a, the more gradual the asymptotic approach. --Stephan Schulz (talk) 13:36, 24 February 2016 (UTC)
- How about f(x) = k * ( 1 - c^x ) ? -- SGBailey (talk) 14:19, 24 February 2016 (UTC)
- Thanks. Small correction: I think you mean f(x) = c * (1 - k^x), with 0 < k < 1. —SeekingAnswers (reply) 04:22, 28 February 2016 (UTC)
- f(x) = c·tanh(x) is a standard one. Sławomir Biały 15:04, 24 February 2016 (UTC)
- f(x) = c(1 − e^(−kx)) for an exponential decay in the difference from the constant, very common as a solution to physical applications. Jheald (talk) 15:14, 24 February 2016 (UTC)
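The requirements (f(0) = 0, strictly increasing, bounded above by and converging to c) can be verified numerically. The two candidates below are standard illustrative choices, not necessarily the exact formulas the respondents had in mind:

```python
import math

c = 3.0  # the positive horizontal asymptote
candidates = {
    "c*(1 - exp(-x))": lambda x: c * (1.0 - math.exp(-x)),
    "c*tanh(x)": lambda x: c * math.tanh(x),
}

xs = [0.1 * i for i in range(1, 150)]  # grid on (0, 14.9]
for name, f in candidates.items():
    assert f(0.0) == 0.0                                    # passes through origin
    assert all(f(a) < f(b) for a, b in zip(xs, xs[1:]))     # strictly increasing
    assert all(f(x) < c for x in xs)                        # always below c
    assert c - f(xs[-1]) < 1e-5                             # converges to c
```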
Integer Sequences with all levels of differences increasing?
For a sequence A, define dA as the sequence made up of the differences between consecutive terms. So if A is 1,3,5,8,100,..., then dA is 2,2,3,92,... and ddA is 0,1,89,.... I'm looking for how to generate a sequence A where, for all n, d^nA has only positive values in it. Setting A equal to the powers of 2 does so, because A = dA = ddA, etc. However, are there integer sequences that grow more slowly than this for which this is true? (I'm thinking not.) Naraht (talk) 16:34, 24 February 2016 (UTC)
- Invert the transformation to write a_n in terms of the initial values (d^jA)_0, and it's easy to see that (d^jA)_0 ≥ 1 for all j implies a_n ≥ 2^n. --JBL (talk) 16:46, 24 February 2016 (UTC)
- Thanx.Naraht (talk) 16:54, 25 February 2016 (UTC)
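The inversion JBL refers to is Newton's forward-difference formula, a_n = Σ_j C(n,j)·(d^jA)_0; if every iterated difference sequence starts at a value ≥ 1, this sum is at least Σ_j C(n,j) = 2^n. A small numerical check of the identity:

```python
from math import comb

def diff(seq):
    """Forward differences: (dA)_i = A_{i+1} - A_i."""
    return [b - a for a, b in zip(seq, seq[1:])]

A = [1, 3, 7, 16, 35, 81, 190]   # an arbitrary test sequence
rows = [A]
for _ in range(len(A) - 1):
    rows.append(diff(rows[-1]))   # rows[j] is d^j A

# Newton's forward-difference formula: A_n = sum_j C(n,j) * (d^j A)_0
for i in range(len(A)):
    assert A[i] == sum(comb(i, j) * rows[j][0] for j in range(i + 1))

# For the powers of 2, every difference row is again the powers of 2,
# so each (d^j A)_0 = 1 and A_n = sum_j C(n,j) = 2^n exactly.
P = [2**i for i in range(8)]
assert diff(P) == [2**i for i in range(7)]
```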
Calculating percent below X on normal distribution curve
I think this is a very easy problem, just one I haven't encountered. I have the average and standard deviation for a normal distribution curve. For this example, assume avg = 123 and standard deviation = 16. I want to know what percent of the population being measured is below 140. I started with trying to calculate the value of the curve at 140. I used a rather nasty-looking formula: (1/(sdev * sqrt(2*PI)))*exp(-1*(pow(140-avg,2)/(2*pow(sdev,2)))). However, that gives me 0.0142. I expect it to be much higher. So, I checked the value at the mean, 123. I got 0.0249. This tells me that either the max height of the curve is 0.0249 or the formula I am using is completely wrong. So, I thought I'd ask here. Am I on the right track with a wrong formula, or do I need to tackle this in a completely different way? 209.149.114.211 (talk) 19:44, 24 February 2016 (UTC)
- You're mixing up the probability density function (PDF) with the cumulative distribution function (CDF).
- The PDF gives you the chance to be near a specific value. The PDF for the normal distribution is given by f(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)). In your case, it will be higher for 123 than for 140 because the items are more likely to be near the mean than near any other value.
- The CDF gives you the probability to be less than a specific value. This is what you need here. It is, of course, an increasing function.
- The CDF of the normal distribution is not elementary, but it is ubiquitous in statistics. Before there were computers there were tables giving its values, and whatever calculation system you're using for this should have it as well. Or you can use a table like the one here. To use it, first normalize: z = (x − μ)/σ = (140 − 123)/16 = 1.0625. From the table you can see the value you want is roughly 0.8554. A more accurate calculation with a computer gives 0.855996.
- Please also see Normal distribution. -- Meni Rosenfeld (talk) 20:25, 24 February 2016 (UTC)
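In any environment with an error function available, both numbers from the thread can be computed directly; a sketch using the question's values:

```python
import math

mu, sigma, x = 123.0, 16.0, 140.0

# PDF at x: the height of the bell curve there, NOT the fraction below x
pdf = math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# CDF at x via the error function: Phi(z) = (1 + erf(z / sqrt(2))) / 2
z = (x - mu) / sigma                          # = 1.0625
cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

assert abs(pdf - 0.0142) < 1e-3               # matches the OP's curve height
assert abs(cdf - 0.855996) < 1e-4             # ~85.6% of the population below 140
```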