Talk:Ellsberg paradox


Psychological explanation

The following relevant discussion was copied from Unmet's talk page

Thanks for your addition to the entry on the Ellsberg paradox. I must say I don't understand what you are getting at. Would you mind modifying the entry to be a bit more clear? I am also worried that the explanation may mislead readers. The problem is not that individuals are choosing an option that has a worse expected utility for them, but rather that there is no utility function that can account for their behavior without including some sort of disutility for ambiguity (or something). Anyway, thanks again for your interest! best, --Kzollman 07:38, Jun 22, 2005 (UTC)

dear kevin, thx for your note. sorry, i haven't made myself clear - would you help me to clarify it? i would define 'mistrust of a stranger' as a utility function - probably one could also interpret it as 'disutility for ambiguity', but i was trying to underscore that individuals behave reasonably when choosing to trust a relative and distrust a stranger. this seems to be the case when we have the mental reasoning that Y balls are less than 50% in the first gamble and more than 50% in the second. the only constant i see is a disbelief in the possibility to win it all, as in the case with sister and brother. probably i am defining 'utility' too broadly, but then a question arises - should psychological terministic screens be used when one is dealing with economic terms? best twice, - unmet 22:36 jun22 2005

Are you suggesting that the choices individuals make in the paradox depend on who loads the urn? If so, do you know of any studies that demonstrate this has an effect? best, --Kzollman 03:12, Jun 24, 2005 (UTC)
well, sure - imagine you have to put the balls in yourself and then gamble (which is the same as a loader's cooperation) - isn't it obvious that the results would change to B and C? regards, - unmet 02:36 jun24 2005
I see. It seems plausible that individuals think something is afoot and so choose A and D. Do you know of any psychological studies that conclude this is, in fact, what is happening? It would certainly be nice to have such a reference. Additionally, while this is an explanation, I would like to add a comment that it does not "resolve" the "paradox", since individuals still fail to have a consistent estimation of the proportion of each color in the urn. If it's alright with you, I will make a few changes to clear up the explanation, add a reference if you have one, add the sentence pointing out this does not entirely resolve the paradox, and copy this discussion to the talk page of that article. Cool? Thanks for the discussion! --Kzollman 05:38, Jun 25, 2005 (UTC)
i know a lot of psychological studies where the effect of an experimenter on the results was recorded - take the famous milgram experiment, for example. i do not know any psychological studies to that effect about ellsberg's paradox, but based on the above methodology they wouldn't be hard to do. though, frankly, i doubt that anyone would decide to check the benefits of a loader's cooperation - at this point, they are pretty obvious to me. anyway, i'll be on the lookout for a good psychological reference. sure, you're welcome to make any changes you find appropriate, and big thanks for the help with clarifications. - unmet 20:32 jun26 2005
After much time, I have rewritten it. Tell me if that captures what you're thinking. Thanks again for all your help! best, --Kzollman July 4, 2005 19:26 (UTC)
i am still meditating on your rewrite. i see a couple of points you are trying to make, but somehow the whole construction became very cumbersome. maybe too many negatives? it makes it hard to understand. also, i was kinda fond of the whole approach to this class of experiments - interjecting oneself into a situation and seeing what authenticity is left then. anyway, let me think a few more days. maybe you'll see a way to clear it up even more. best - unmet 02:16 jul06 2005
ps: one more thought - when someone suggests a gamble to us, it automatically means that we can not get the best outcome; otherwise, why would they suggest it at all? your last sentence makes a connection between the two gambles; in fact, nothing would change (in the original formulation or in the psychological solution) if C consisted of red and black balls. no one would suggest a gamble with bad odds, at least not smart scientists - this line of thinking explains the paradox. unmet 15:57 jul06 2005

Cheated? Or just risk averse?


I seriously doubt that being 'cheated' is the most likely explanation of the paradox. In simpler studies of this type, when forced to choose between a sure thing and a bet, the expected value of the bet must usually be significantly higher before the person will choose it. This is usually explained as individuals just being averse to risk. In this case, taking the less-known option gives a 50% chance of a worse option, for no greater pay, so risk aversion completely explains the participants' behavior without needing to invoke the idea that the dealer is trying to cheat them. --Kevin Saff 21:22, 14 July 2005 (UTC)

thx for your remark. try imagining the paradox when the choices must be made by two people - they choose which option they prefer to play. then, their trust in the dealer would be a better predictor than "risk aversion". risk aversion is a moot term - could we predict who will choose B? there are some people who do choose it. but we could easily have two groups, differing in their trust of the dealer, and have 99% certainty in the results of their respective choices. unmet 19:16 jul14 2005

I have removed "The mistake the subjects are making..." In fact, the entire last section is speculative. Can we cite some sources on this? For instance, was this the explanation given by Ellsberg? If so, can we cite it as such? Fool 14:39, 15 August 2005 (UTC)

I'm fine with removing the final section or finding sources to support it. See the above discussion. Simple risk aversion cannot explain the behavior of individuals in the Ellsberg paradox, however. The individual is taking a risk for both bets, and so we would need some "meta" risk aversion (risk over the expectation of bets), which is usually referred to as "ambiguity aversion". --best, kevin ···Kzollman | Talk··· 04:40, August 17, 2005 (UTC)
Well okay, but that point is made elsewhere ("...regardless of the agent's utility function or risk aversion..."), so wouldn't you say it's better to leave it at that, instead of going on to confuse the issue? Maybe, if nobody objects, I will delete the last section and expand a bit on that point. Fool 03:25, 18 August 2005 (UTC)
Sounds fine to me, but I didn't add the last section. --best, kevin ···Kzollman | Talk··· 16:11, August 18, 2005 (UTC)

Probable solution: subjects misinterpret the scenario


I think the probable solution is that many subjects misinterpret the scenario, just as I originally did! (I was in the process of writing a refutation, when I realised my mistake!) If the subject assumes that they will be repeating the same gamble, using the same bag of balls each time (with balls being replaced before the next draw), then, although A and B have the same initial expected payout, A is less risky than B; similarly C and D have equal expectations, but D is less risky than C (the less risky gambles having a fixed expectation, whatever mixture of balls is in the bag - within the defined constraints). In this case, rational people having any positive risk aversion prefer the lower risks; those with negative risk aversion prefer the risky ones.

Considering a single gamble is a bit artificial and "everyone" knows that probabilities only apply to averages of many runs of the "same" event. Even so, why should one assume that the "same" event implies using the same bag of balls each time (with the chosen ball returned to the bag), rather than a new bag of balls (or totally reconstituted contents) each time (albeit satisfying the specified ball distributions each time)? The analysis of the latter is the same as the one presented in the live page, but the former is what I initially assumed, perhaps because, subconsciously, I knew it would be simpler to do in reality.

Most people probably do not choose their risk aversion rationally, especially in a hypothetical problem, but there is a rational attitude to risk, depending on the utility of the payouts to that person. With the above misinterpreted scenario, if, say, a below-average payout had no utility, then the low risk gambles are preferable; but if, say, only above-average payouts have any utility, then high risk gambles are preferable. (Strictly speaking, it may be rational to choose lower expectations and various risks (variability) of monetary payout; but in terms of the utility function, it is always rational to maximise utility, and risk (variability) of utility is irrelevant (personal preference being subsumed in the personal utility function).)

John Newbury 22:36, 22 August 2005 (UTC)
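A small simulation of this repeated-gamble reading (a minimal sketch, assuming the bag is loaded once with a uniformly random number of black balls and then reused with replacement; the function name and trial counts are illustrative, not from the original experiment):

<syntaxhighlight lang="python">
import random

def average_winnings(gamble, trials=10_000, draws=100):
    """Long-run average payout per draw; the bag is fixed within each trial."""
    results = []
    for _ in range(trials):
        black = random.randint(0, 60)                      # bag loaded once per trial
        p_win = 30 / 90 if gamble == "A" else black / 90   # A wins on red; B on black
        wins = sum(random.random() < p_win for _ in range(draws))
        results.append(100 * wins / draws)
    mean = sum(results) / len(results)
    var = sum((r - mean) ** 2 for r in results) / len(results)
    return mean, var

print("A:", average_winnings("A"))  # mean ~$33, small variance (sampling noise only)
print("B:", average_winnings("B"))  # mean ~$33, much larger variance (bag uncertainty)
</syntaxhighlight>

Both gambles have the same long-run mean, but B's average payout inherits the uncertainty about the bag, which is exactly the extra risk described above.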

I don't think it was a hypothetical problem. I'm not sure that it was actually $100, and I didn't write that bit, but these sorts of experiments can be done with real payouts. But, maybe we need more details on the experiment, and preferably a cited reference.
Also, why was "30" changed to "20"? I don't know that there were 30 or 20 or 100 red balls, and again I didn't write that bit, but it seems to me that there ought to be half as many red balls as black+yellow balls, making the probability of drawing red 1 in 3. No?
In any case, standard utility theory doesn't countenance any of that sort of thing: you must assign a probability of winning to each gamble, and since there are only two payouts (0 or $100), your choice must be entirely determined by this assignment. I happen to agree that choosing A and D is a reasonable thing to do, but that just makes the paradox stronger, doesn't it? Fool 17:10, 23 August 2005 (UTC)
Sincere apologies for my balls up (or balls down!) of a change. Evidently I cannot read - I presumably thought 60 was the total of all balls.
In this discussion I should have said "artificial" rather than "hypothetical". Some other nonsense: I should have said, "If, say, the change of utility with payout is larger when payout is below average than when above, then low risk gambles are preferable; but if the other way round, then high risk gambles are preferable."
I should use your moniker!
John Newbury 18:02, 16 October 2005 (UTC)

Ambiguity aversion and drama theory


The 'psychological explanation' that has been cut from this page resembles a simple (perhaps too simple) version of the drama-theoretic explanation of ambiguity aversion. I've added a link to this explanation. (I've also corrected the number of red balls from 20 back to 30. 30 is what it should be, without doubt.)

The drama-theoretic explanation invokes the 'personalization hypothesis'. This says that humans interacting with an entity in a significant way will consciously or subconsciously personalize that entity -- i.e., think of it as another party in an ongoing dramatic interaction. Luck (as in 'Luck be a lady') or death (as in 'Death has come to take me') are almost inevitably personalized in this way. Now what an experimenter presents as an ambiguity will, according to this hypothesis, be experienced as something that's chosen by the (personalized) experimental set-up. An assurance that the set-up will make an impartial, disinterested choice may be distrusted. The resulting 'trust dilemma' is reduced or eliminated by choosing an alternative that requires less trust to be placed in the experimental set-up. Nhoward 19:17, 23 August 2005 (UTC)

Interpretation as a competition game


Two things. First, choosing the pairs A and D, or B and C respectively, ensures that you get $100 with no risk. I find that attractive and not at all irrational. Second, this is the Nash equilibrium (i.e., the minimum amount you are sure to gain) for the game. -- KarlHallowell 19:46, 23 August 2005 (UTC)

Karl - The two gambles are for different draws of the urn. Thanks for pointing out this omission, I'll add this to the article. --best, kevin ···Kzollman | Talk··· 16:38, September 2, 2005 (UTC)
I think (alas, original research) that competition is the simplest explanation.
If my best friend is going to load the urns, and he gets to pick between X black balls loaded or Y black balls loaded, where X <= 30 and 30 <= Y, and he knows I have the choice between A or B, then my optimum choice is B, even though I don't know X or Y.
If my best friend gets to re-load a *different* urn with fresh balls for the next gamble, and he gets to make a fresh decision between V black balls loaded or W black balls loaded, where V <= 30 and 30 <= W, and he knows I have the choice between C or D, then my optimum choice is C, even though I don't know V or W.
However, if I don't know who is loading the 2 different urns, and I leap to the conclusion that the person loading the 2 different urns has the same choice between X and Y and is in collusion with the person *offering* the money (even though the person offering the money doesn't know V or W or X or Y), then the only Nash equilibrium is for me to choose A and D.
I'm guessing most people leap to the same conclusion, and overlook the little detail about the draw(s) being from the *same* urn.
Even when I realize that both draws are from the same urn, picking A and D is still a fairly rational choice. When I continue to assume the person loading the urn is in collusion with the person offering the money, picking A and D has no worse expected payout than any other choice (unless the loader is biased towards me). If the colluding person loading the urn does so after he hears (or correctly guesses) my choices, then the expected payout after the 2 draws is: AD: $100, BC: $100; AC: about $67, BD: about $67 (there are 2 Nash equilibria, as KarlHallowell points out). --68.0.124.33 (talk) 19:17, 20 August 2008 (UTC)
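A quick minimax check of these payouts (a minimal sketch, assuming two draws with replacement from one urn of 30 red, x black, and 60-x yellow balls, with the loader choosing x after hearing the picks):

<syntaxhighlight lang="python">
# Win probabilities for each gamble, given x black balls in an urn of 90.
win = {
    "A": lambda x: 30 / 90,        # red
    "B": lambda x: x / 90,         # black
    "C": lambda x: (90 - x) / 90,  # red or yellow
    "D": lambda x: 60 / 90,        # black or yellow
}
for pair in ("AD", "BC", "AC", "BD"):
    # An adversarial loader minimizes your expected total over x = 0..60.
    worst = min(100 * (win[pair[0]](x) + win[pair[1]](x)) for x in range(61))
    print(pair, "worst-case expected payout: $%.2f" % worst)
</syntaxhighlight>

This prints $100 for AD and BC and about $66.67 for AC and BD, so AD and BC are indeed the maximin picks against a colluding loader.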

Removed drama theory reference


I have removed the following text from the article:

Note: Ambiguity aversion can be explained psychologically using the 'personalization principle' of Drama Theory -- though this explanation doesn't reconcile the paradox with expected utility theory.

The reference mentioned there is a posting on a web forum by an unsigned author. It does not meet the requirements of notability. In addition, the explanation from the web forum is almost exactly the same as the explanation that is discussed above (and the consensus decision was that it should be removed). --best, kevin ···Kzollman | Talk··· 16:37, September 2, 2005 (UTC)

How can this be true?


I don't think this model fits reality. Many people love gambling or the lottery, even when they know that their expectation value is a loss; and there would be many more if they weren't held back by ethical/religious concerns. How can this model overlook this simple fact? Common Man 06:38, 28 October 2005 (UTC)

Common - Are you concerned that utility theory doesn't account for people's participation in gambling games? Ellsberg's paradox is trying to illustrate just this point, that gambling behavior cannot be accounted for by the standard utility theory. However, simply engaging in gambling doesn't necessarily disprove utility theory since the excitement at the prospect of winning a lot of money might be worth the small sacrifice in expected monetary returns. --best, kevin ···Kzollman | Talk··· 18:17, 28 October 2005 (UTC)
Thanks for your reply. No, I didn't mean that utility theory doesn't account for it. I have two problems:
  1. This experiment contradicts my experience. I don't know where the claim that people have "risk aversion" comes from. Even I would have chosen the risky B+C over the safe A+D. Since I'm certainly more "risk averse" than gamblers I would expect many other people to make that choice, as well.
  2. I don't see a paradox. Why on earth should people make their decisions based on an expectation of how many Black and Red balls there are? A rational player would make no such assumption. (Provided he/she trusts that the experiment wasn't set up to cheat you, which I assume is meant by providing two complementary choices. (On a further side note, it still doesn't add up - a malicious organizer still has an advantage if Y=0, but I took this to be a flaw of the setup, rather than its point.)) My only assumption was about my own utility function, which I assume is convex, so that (U(0$)+U(200$))/2>U(100$). (To be honest, it probably isn't and I'm merely rationalizing my decision, which is based on non-utilitarian aspects such as joy of playing. But my point is that there is no paradox - it can be explained by utility theory.) Common Man 20:43, 28 October 2005 (UTC)
Sorry I misunderstood. So, you would choose B and C, which differs from most people; that's not all that surprising, since people are not all the same in their choices. However, there are studies that indicate most people choose A and D. Risk aversion in this case doesn't really apply since the risk is comparable in all cases. The question is how ambiguity averse you are; in this case it looks like you are "ambiguity seeking", if I may coin a phrase.
With regard to your second point, one cannot maximize one's expected utility without making some estimate of the number of black and yellow balls in the urn. One must determine the probability of receiving $100, which depends on the probability of drawing red, black, or yellow balls. The utility of Gamble A (for example) is the probability of drawing a red ball multiplied by the utility of $100, right? The problem is a paradox regardless of your utility function since the payoffs for all gambles are the same, so the term U($100) is common to both sides of every equation. If you think that your preference for B and C can be explained using standard utility theory, please provide the probability assignments which make U(Gamble B) > U(Gamble A) and U(Gamble C) > U(Gamble D). I must admit that I am skeptical. --best, kevin ···Kzollman | Talk··· 01:08, 29 October 2005 (UTC)
Thank you for your patient explanation. I am convinced that it's true now. I'm sorry - I confused this with a similar experiment, which purports to prove that people are "risk averse" because they prefer getting $100 with 100% certainty over $200 with 50%. Please feel free to delete or edit what I wrote. I still have a problem with this experiment, but I'd like to get back to it later, as I'll be away for some time. I hope to meet you here again soon. Common Man 05:58, 29 October 2005 (UTC)
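For concreteness, the argument sketched above can be spelled out (a sketch, writing P(R), P(B), P(Y) for the subjective probabilities and normalizing U($0) = 0):

<math>U(A) = P(R)\,U(\$100), \qquad U(B) = P(B)\,U(\$100)</math>
<math>U(C) = [P(R)+P(Y)]\,U(\$100), \qquad U(D) = [P(B)+P(Y)]\,U(\$100)</math>

Preferring B to A forces P(B) > P(R), while preferring C to D forces P(R) > P(B); no probability assignment satisfies both, as long as U($100) > U($0).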

Same draw / different draw

It is notable that in this particular game, and other equivalent ones, the apparent paradox can be accounted for by the gambler making a decision to not 'put all her eggs in one basket'. Taking Gambles A and D together allows her to 'cover her bases' so that in the event that she loses one of the gambles, she will win the other, since they are mutually exclusive when made on the same draw, and their probabilities sum to one...

It's for a different draw on the same urn. -Dan 13:52, 17 January 2006 (UTC)

If you're saying that you get two gambles (one of A or B and one of C or D) on the same urn without it being shuffled in between, then there is no paradox; people are just making the wrong choice.
You have three possible outcomes: lose both ($0), win one ($100), or win both ($200). Assuming (as is the case for most rational people) that U($100)-U($0) > U($200)-U($100), then you should strictly prefer taking gamble B and gamble C. All pairs of choices have the same EV of $100, but {B,C} minimizes variance and hence maximizes utility for the risk averse.
Let x be the number of black balls in the urn. Then your chance of losing both gambles is:
{A,C}: 2/3*x/90,
{A,D}: 2/3*1/3,
{B,C}: (1-x/90)*x/90,
{B,D}: (1-x/90)*1/3.
Since EV is $100, your chance of winning $200 is the same as your chance of winning $0. Standard deviation will work out to be $100*sqrt(SUM_x=0->x=60(Prob(experimenter chooses x black balls)*2* Prob(you lose both))).
Assuming only that the experimenter's method is fair (i.e. SUM_x=0->x=60(x * Prob(experimenter chooses x black balls)) = 30) and that he doesn't use the trivial method of choosing x=30 100% of the time, it can be shown that {B,C} will always have minimal std. dev. out of the four choices.
Note that if you're limited to one gamble per shuffle they all have the same variance, so there should be no preference. It's only when you use the same urn more than once that differences come up. Zigswatson (talk) 22:34, 11 April 2008 (UTC)
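A quick numerical check of that claim (a minimal sketch, assuming the experimenter draws x uniformly from 0 to 60 and both draws are made with replacement from the same urn):

<syntaxhighlight lang="python">
from math import sqrt

# P(lose both gambles | x black balls), per the formulas above.
pairs = {
    "{A,C}": lambda x: (2/3) * (x/90),
    "{A,D}": lambda x: (2/3) * (1/3),
    "{B,C}": lambda x: (1 - x/90) * (x/90),
    "{B,D}": lambda x: (1 - x/90) * (1/3),
}
for name, lose_both in pairs.items():
    p = sum(lose_both(x) for x in range(61)) / 61   # P($0) = P($200); EV is $100
    print(name, "std dev = $%.2f" % (100 * sqrt(2 * p)))
</syntaxhighlight>

Under this prior {B,C} comes out near $61 while the other three pairs all sit near $67, consistent with the claim above.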

Comment on the paradox


I don’t disagree with the general thrust of this article, but the “if and only if” argument above does not take into account the case when the two options are thought to have the same likelihood. At the risk of a double negative, we could say that Gamble A will be preferred if it is thought that a red ball is not less likely than a black ball. Even this way of expressing the rationale does not adequately take into account how a preference is made between two “equal” choices, measured by utility. Consequently, the mathematical demonstration should have greater-than-or-equal-to signs in the inequalities below. The conclusion is that B = 1/3, rather than a contradiction.

It should also be pointed out that there is consistency in the observed behaviour, since the odds are known precisely for Gambles A and D, but have an unknown amount of variability for B and C. The inference that might be drawn from the observed results is that several factors are considered when making decisions, especially when there is little difference in one of the major criteria. A link to Portfolio theory might be a useful addition to this post, since Portfolio theory considers how one should consider the variance as well as the mean when making decisions.

I leave these suggestions to somebody more familiar with the topic to put into context.

The preceding comment by User:152.91.9.8 was moved here from its original place in the article itself.
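Spelling out the suggestion above in the notation of the article's demonstration (a sketch, with R = 1/3 the known probability of red and B the subjective probability of black):

<math>A \succsim B \iff R \ge B, \qquad D \succsim C \iff B \ge R</math>

With weak preferences, the two observed choices taken together give B = R = 1/3, which is consistent rather than contradictory.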

Inconsistency


In the first section, it says "it follows that you will prefer Gamble A to Gamble B if, and only if, you believe that drawing a red ball is more likely than drawing a black ball", or R > B. However, in the "Mathematical demonstration" section, ">=" is used instead of ">". Is this an inconsistency or am I wrong here? Rbarreira 13:18, 5 February 2007 (UTC)

Mathematical demonstration


In the mathematical demonstration, I don't think it's necessary to require U(100) preferred to U(0). The paradox holds as long as we consider that both U(100) and U(0) are constant throughout.

Running through the maths you can easily show that R[U(100)-U(0)] > B[U(100)-U(0)] in the first case and... B[U(100)-U(0)] > R[U(100)-U(0)] in the second.

Hence, it's not actually necessary to make a judgement over preferences beyond that A is preferred to B and D to C. Jamesshaw 15:30, 11 May 2007 (UTC)

Right: just noticed this is stated a little later, but I am still not sure why it is necessary to make this assumption in this section. Jamesshaw 15:32, 11 May 2007 (UTC)
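For reference, the algebra runs as follows (a sketch, with R, B, Y the subjective probabilities of the three colours):

<math>U(A) > U(B) \iff R\,U(100) + (B+Y)\,U(0) > B\,U(100) + (R+Y)\,U(0) \iff (R-B)\,[U(100)-U(0)] > 0</math>
<math>U(D) > U(C) \iff (B-R)\,[U(100)-U(0)] > 0</math>

The two preferences are jointly impossible whenever U(100) differs from U(0), whichever of the two is larger, which is the point made above.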

A hedge solution


Gambles A and D hedge each other. That would be the trivial solution to why they are both preferred. There could be 0 black balls, and yet there could also be 60. We would like to cover both possibilities. --76.217.95.43 (talk) 02:47, 24 February 2008 (UTC)

Yes, picking both A and D is a hedge. But that doesn't explain why surveys show that most people actually do pick A and D.

Situation 1: You are allowed to pick either A or B (or neither) on one draw, and also allowed to pick either C or D (or neither) on the *same* draw.

Picking A and D is a hedge -- if you pick A and D, then you get a guaranteed $100 no matter which color ball is pulled.

However, picking B and C is also a hedge -- if you pick B and C, you also get a guaranteed $100 no matter which color ball is pulled.

Situation 2: You are allowed to pick either A or B (or neither) on one draw. Then the ball is replaced, the urn shaken, and you are allowed to pick either C or D (or neither) on the *next* draw.

There's a lot more uncertainty here, but the expected return is the same as before: Picking A and D is a hedge, with an expected $100 for the 2 picks of Situation 2. Picking B and C is also a hedge, with an expected $100 for the 2 picks of Situation 2.

So why do surveys show that most people pick A and D, rather than B and C? --68.0.124.33 (talk) 15:57, 20 August 2008 (UTC)
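The Situation 1 claims are easy to verify by enumerating the three ball colours (a minimal sketch):

<syntaxhighlight lang="python">
# One draw settles every gamble picked on it.
payout = {
    "A": lambda ball: 100 if ball == "red" else 0,
    "B": lambda ball: 100 if ball == "black" else 0,
    "C": lambda ball: 100 if ball in ("red", "yellow") else 0,
    "D": lambda ball: 100 if ball in ("black", "yellow") else 0,
}
for pair in (("A", "D"), ("B", "C")):
    totals = {b: sum(payout[g](b) for g in pair) for b in ("red", "black", "yellow")}
    print(pair, totals)  # each pair pays exactly $100 whatever colour is drawn
</syntaxhighlight>

Both pairs are perfect hedges on a single draw, which is why the survey results call for some further explanation.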

Look up at my section 'Why isn't it just comparing risk to knowledge?' and it should be obvious why people choose what they do. —Preceding unsigned comment added by 69.205.97.220 (talk) 10:24, 2 March 2009 (UTC)

Let's be a little more clear. Remember, you are not given both choices at once. So, in the first choice, you make the choice with the more certain outcome, with the reasoning as presented in the article: the unknown choice is more likely to be the worst one in the real world. Then, you get to the second choice, and then you hedge your bets. So you get either AD or DA.
And, yes, I know this page is for improving the article, not actually discussing it, but the latter seems to be what is actually happening. And, anyway, a source could always be used to dispute or verify this stuff. — trlkly 12:14, 30 November 2010 (UTC)

What if individuals face such bets repeatedly in life and treat them like a portfolio of bets?

Previous discussion deleted and comments revised by OP
I think the math and economics are both wrong here

First, the math. When the article's author solves for the expected utility from a black ball (bet B), he violates Jensen's inequality.

For bet B, the individual faces an unknown distribution between 0 and 2/3. With no information on the distribution, a rational individual will use the zero information probability distribution --> the uniform distribution (you can disagree with this if you want, but the result holds for any probability distribution, so there's no point). So with a discrete number of balls, the individual has a utility function over a discrete uniform distribution, or:

<math>E[U] = \frac{1}{61}\sum_{n=0}^{60} U\!\left(100\cdot\frac{n}{90}\right)</math>

When the author writes utility as:

<math>E[U] = \frac{1}{61}\sum_{n=0}^{60}\left[\frac{n}{90}\,U(100) + \frac{90-n}{90}\,U(0)\right]</math>

He's taking the expectation from outside the utility function and moving it to the inside (recall that the sum here is an expectation). But you can't do this. E(U(x)) is strictly less than U(E(x)) when individuals are risk averse.

Now the economics -- individuals choosing B do not violate the expected utility hypothesis. They are rational risk averse dudes, preferring a tight distribution of payoffs to a fat one. More importantly, Ellsberg does not seem to argue this in his article. He argues that preferences do not reveal unique probability comparisons, as some other dude (Savage) had maintained. He's right. But I see this more as implying that the expected utility hypothesis does not imply Savage's axioms hold when comparing fixed and random payoffs. Regardless of Ellsberg's intent, the choice of B is perfectly consistent with risk-averse choices in the expected utility hypothesis. — Preceding unsigned comment added by 128.151.203.76 (talkcontribs) 21:32, 3 July 2008 (UTC)

Erm. Are you sure this is right? The payoff is never anything but $100 or $0. Suppose in reality n=45. You seem to think your utility is in fact U($50). Your utility of a state in which you have a 50:50 chance of getting $100 or $0 is half of U($100) plus half of U($0). That is the expected utility hypothesis. This may very well be less than U($50) if you are risk averse, but this never comes into the picture because the payoff is never $50. --EmbraceParadox (talk) 14:38, 4 July 2008 (UTC)
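To put a number on that (a sketch, taking U(x) = √x as one example of a concave utility):

<math>E[U(x)] = \tfrac{1}{2}\sqrt{100} + \tfrac{1}{2}\sqrt{0} = 5 \;<\; \sqrt{50} \approx 7.07 = U(E[x])</math>

so substituting U($50) for the expected utility of the 50:50 bet overstates it, exactly as Jensen's inequality says.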

NOTE: Revised from first effort. Thanks for the comment.

If an individual faces a choice between U(x)=5 with certainty and a bet where there's a 0.5 chance that U(x) = 6 and a 0.5 chance that U(x) = 4, his expected utilities are equal. But if he's making many such bets in sequence, then he doesn't really have to pick one or the other. He can use a mixed strategy over time as though he were constructing weighted portfolios in each alternative. And it's possible that this mixed strategy will have higher utility than always picking one or the other.

Let's say a value maximizing investor has the option of allocating between two bets. Call them A and B. A delivers $100 with probability 0.5. B has two cases. Case 1 has 50% probability, and pays $100 40% of the time. Case 2 has 50% probability, and pays $100 60% of the time.

Say that the weights on A and B sum to 1. Call the weight on A = A, so the weight on B = 1-A. The investor's objective function looks like:

<math>V(A) = 0.5\,U(100A) + 0.5\left[0.4\,U(100(1-A)) + 0.6\,U(0)\right] + 0.5\left[0.6\,U(100(1-A)) + 0.4\,U(0)\right]</math>

Which reduces to:

<math>V(A) = 0.5\,U(100A) + 0.2\,U(100(1-A)) + 0.3\,U(100(1-A))</math>

Taking the derivative with respect to A and setting = 0 to maximize, you get:

<math>0.5\,U'(100A) - 0.24\,U'(100(1-A)) = 0</math>

<math>0.5\,U'(100A) = 0.24\,U'(100(1-A))</math>

The optimal value of A will depend on the choice of U, but I'm pretty sure (though haven't proven) it will almost always be greater than 0.5 for risk averse investors. For illustration, suppose U(x) = sqrt(x). Then U'(x) = 1/(2*sqrt(x)). Inserting this function, multiplying both sides by 2, and taking the reciprocal of both sides gives:

<math>2\sqrt{100A} = \frac{\sqrt{100(1-A)}}{0.24}</math>

Divide by 10 on each side to get:

<math>2\sqrt{A} = \frac{\sqrt{1-A}}{0.24}</math>

Square to get:

<math>4A = \frac{1-A}{0.0576}</math>

Which gives A = 0.812744 > 0.5. So A gets a higher weight.

This doesn't necessarily imply that if you're forced to choose between A or B, you choose A. However, think of the experiment as being a repeated game and not a 1-off thing. Rational risk-averse individuals would choose A more often and B less often as if they were putting together a weighted portfolio.

I have to stop now. Comments/further development welcome. — Preceding unsigned comment added by 128.151.203.76 (talkcontribs) 23:19, 4 July 2008 (UTC)

I think your very first equation is wrong. You have that your objective function is (the expected utility of the payoff from bet A) plus (the expected utility of the payoff from bet B). It should be the expected utility of (the payoff from bet A plus the payoff from bet B). So you'll have eight terms (six if you set U($0) = 0, which is fine). For that matter, and for the same reason, I think the comparison between mixed strategy and weighted portfolio is wrong. The expected utility of going one way once and the other way once is not the expected utility of choosing randomly (50:50) twice. --EmbraceParadox (talk) 17:59, 5 July 2008 (UTC)

This is wrong. Look at the second equation.

<math>V(A) = 0.5\,U(100A) + 0.2\,U(100(1-A)) + 0.3\,U(100(1-A))</math>

You can write this as:

<math>V(A) = 0.5\,U(100A) + 0.5\,U(100(1-A))</math>

Taking the derivative and setting =0 will always yield A = 0.5. —Preceding unsigned comment added by 74.74.158.35 (talk) 23:48, 17 July 2008 (UTC)
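A numerical cross-check of both replies (a minimal sketch, assuming each bet's payoff scales linearly with the weight placed on it and taking U(x) = √x; the eight-term objective is the one suggested two comments up):

<syntaxhighlight lang="python">
from math import sqrt

def expected_utility(w):
    """E[U(payoff_A + payoff_B)] over the joint outcomes of both bets."""
    total = 0.0
    for case_p, b_win_p in ((0.5, 0.4), (0.5, 0.6)):    # bet B's two cases
        for a_p, a in ((0.5, 1), (0.5, 0)):             # bet A wins or loses
            for b_p, b in ((b_win_p, 1), (1 - b_win_p, 0)):
                total += case_p * a_p * b_p * sqrt(100 * w * a + 100 * (1 - w) * b)
    return total

best_w = max(range(1001), key=lambda i: expected_utility(i / 1000)) / 1000
print(best_w)  # 0.5: the properly combined objective peaks at equal weights
</syntaxhighlight>

A grid search over w puts the optimum at w = 0.5, in line with the A = 0.5 conclusion above.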

I kind of agree, but would state it this way: It makes absolutely no sense to analyze the bets under the assumption that the gambler will make a guess about the relative probabilities of yellow vs black balls, because no information is given. A proper analysis of the probability would include all possible ratios of black to yellow balls as part of the set of outcomes. From this point of view, although I'm being informal here, the probability of winning bet A is 1/3, while the probability of winning bet B is between 0 and 2/3 .. 0 if there are no black balls, and 2/3 if there are 60 black balls. The gambler prefers bet A if he prefers a bet with known probabilities; similarly the gambler prefers bet D if he or she prefers a bet with known probabilities. So it's not surprising surveys show most prefer bets A and D! Another aspect of this problem is that it's not well-defined as a problem in probability, because there is no information given about the distribution of possible ratios of yellow to black balls. It's like asking whether someone would prefer a bet of heads in which a fair coin is tossed, or a bet of heads in which a coin of UNKNOWN fairness is tossed .. in the first case, you know your odds. In the second case you have no information about the odds. (HOWEVER, because you are told what your bet must be -- "heads" -- you might suspect the party proposing the bet has stacked the odds against you.) —Preceding unsigned comment added by 67.8.154.37 (talk) 14:06, 3 March 2011 (UTC)

The description of why it is not risk aversion is a bit confusing


I think the user strategy is exactly risk aversion.

By choosing strategy (a) he is guaranteed to have a 1/3 probability of winning; if he chooses (b) he might have a 2/3 probability or he might have 0, depending on the game host.

Similarly, by choosing (d) he is guaranteed to have a probability of 2/3 of winning, whereas by choosing (c) he might have a probability of 1, or of 1/3, of winning. In both cases the probabilities of winning in scenarios (b) and (c) are determined by the game host, and users try to protect themselves against adversarial behavior... — Preceding unsigned comment added by 76.124.186.208 (talkcontribs) 9 February 2009 (UTC)

Why isn't it just comparing risk to knowledge?


I'm looking at the "paradox" and it seems obvious that this shouldn't violate any numerical reasoning. My argument: In the first scenario, A is a sure number (30) where B is the 'gamble', but in scenario two, it's the other way around, A is the gamble (30+?), but B is the sure number (60). How could someone interpret this as having a dissonance or not following a simple rule? I think if someone is more of a risk-taker than not, it's a B-C combo, but if they're not a risk taker, it's an A-D combo. Anyone care to help with where the "paradox" comes in? — Preceding unsigned comment added by 69.205.97.220 (talkcontribs) 10:22, 2 March 2009 (UTC)

You just proved the paradox. According to utility theory, if you have a preference for A then you should also have a preference for C. If you have a preference for B you should also have a preference for D. Just the opposite of what you suggest. Of course, the whole point is to show that utility theory is not a complete model of decision making. 68.94.91.206 (talk) 20:58, 28 July 2012 (UTC)
I must disagree with the statement above (beginning with "You just proved the paradox.") If you have a preference for A (risk averse) then you should have a preference for D. Both choices have a definite expected value. In contrast, choices B and C involve some element of chance and would be better suited to a risk taker. My intuitive choice was originally A-C and after thinking about it for a while I stuck with my A-C combo despite A-D "technically" being the most risk averse. {I have a risk-averse personality}. I believe the economic theory that ~"The negative effect of a dollar lost is greater than the positive effect of a dollar gained" (Risk aversion/marginal utility) makes the clear logical choice A-D. Although modern psychological theories would suggest that an unexpectedly high number of humans would choose A-C (both "RED" options). Alexbucg (talk) 21:50, 1 March 2013 (UTC)
A is to D as B is to C in terms of risk. The argument that those who prefer A should have a preference for C is literally based on the COLOR of the balls, which is irrelevant. The utility theory interpretation incorrectly ties utility to color, when it should be tied to risk. This is not a paradox. Anongork (talk) 08:03, 6 April 2013 (UTC)

"According to utility theory, if you have a preference for A then you should also have a preference for C." Citation needed. Utility theory predicts that goods distributions with higher utility are preferred. Because of the anti-correlation between the number of black and yellow balls, adding wins for yellow balls reduces the variance of the outcome for game B=>D and increases it for A=>C, while keeping expected value(A)=E(B) and E(C)=E(D). The results are fully explained by supposing that when expected value is equal, people prefer distributions with small variance to distributions with high variance. For instance, you predict the experimental results if you say utility(distro) = mean value / (stdv+1), which is a perfectly fine utility function. Philgoetz (talk) 18:13, 12 February 2015 (UTC)Reply


The simplest explanation


People are stupid and don't understand math. It's that simple. It's what keeps Vegas going.

Or maybe people really do prefer a known bad deal to a deal that probably will be better, but maybe not.

I wonder if the results might vary by culture. —Preceding unsigned comment added by Paul Murray (talkcontribs) 23:17, 11 March 2009 (UTC)

Paradox??


Choosing A and D is the most rational choice. With the approach suggested in the article one could easily be fooled into a poor choice by using 30 red and 60 black balls. When presented with the first problem, you would logically choose A. Clearly the approach used in the article leads to a poor choice, as choosing C in this situation leads to a lower chance of winning. It's irrational to assume that the choice you made in the first gamble was correct.--Ancient Anomaly (talk) 15:19, 15 December 2010 (UTC)

How is this a "paradox"? People don't like uncertainty. If someone's model fails to predict this aspect of human behavior, the model is flawed, but that doesn't make this a paradox. Please update the article to explain more clearly where the "paradox" lies, if there is one. 129.219.155.89 (talk) 18:35, 14 January 2014 (UTC)

Different choices


The guaranteed probability of getting a red ball is 30/90. The guaranteed probability of getting a non-red ball is 60/90. The probability of getting yellow, or of getting black, is undefined.

In A the probability of getting $100 is 30/90, in D it is 60/90. B and C are just trying your luck. — Preceding unsigned comment added by 94.197.127.135 (talk) 09:01, 23 June 2013 (UTC)

Unexplained conclusion


"So, supposing you prefer Gamble A to Gamble B, it follows that you will also prefer Gamble C to Gamble D."

Why does this follow? Based on this false assumption, you will come to the "paradox".

To understand the "paradox" it is important to understand the assumed conclusion. — Preceding unsigned comment added by 94.197.127.135 (talk) 09:09, 23 June 2013 (UTC)

Alternative illustration of the paradox


I would propose that the illustration of the paradox be changed to the one described here: http://ocw.mit.edu/courses/economics/14-123-microeconomic-theory-iii-spring-2010/lecture-notes/MIT14_123S10_notes06.pdf — Preceding unsigned comment added by Bquast (talkcontribs) 09:03, 3 October 2013 (UTC)

The statement of the problem is far from sufficiently clear


Are you told both bets at the same time? If not, do you complete the first draw before being informed of the second bet? If you are told sequentially, you might choose A out of risk aversion, and then make your choice on the second bet influenced by framing effects created by the first decision (Kahneman's book discusses this).

Furthermore, if you are told the bets sequentially, there remains the hypothesis that the person giving you the wagers is behaving sharply (i.e. trying to psych you out). Since you don't know the second bet is coming, when the first bet is offered, if you are defending yourself against sharp behaviour, you'll assume there are no black balls and pick A. The sharp operator will, of course, have placed 60 black balls in the urn, expecting you to pick A. Then when the second offer is posed, you can at most win $100 total. Defending yourself against sharp thinking you'll pick D (by now realizing that you were tricked into not taking the best choice on the first bet). In general, you'll always suspect that the person making the offer is, at that step in the game, one step ahead of the chooser, and you'll make the picks viewed as most immune from this disadvantage.

Additionally, for mathematical rigour, in the case where the second offer is made after a choice is taken on the first offer, it ought to be stated whether the second offer posed can be influenced by the choice or selected ball from the first offer; otherwise, explanations invoking an analysis of the psychology of the person posing the offers are insufficiently posed.

Finally, for completeness it really ought to be stated whether the chooser is left wondering whether there might possibly be a third offer on the same urn.

In the case where you are told that there are two bets only, and you get to make both your selections in tandem, a rational analysis remains subject to your utility function for the different pay-offs, which can reasonably be linear, concave, or convex depending on your personal circumstance (consider The Gift of the Magi by O. Henry). This does not necessarily have anything to do with risk aversion. — MaxEnt 19:17, 6 March 2015 (UTC)

Citations and Illustration


Hey, I'm currently working - together with a fellow student - on the German version of the article for a seminar. (Our current version can be found here.) For this I created some graphics which could be used to visualize the 2 versions of the Ellsberg paradox. If I have time at the end of the semester I could also add some of the stuff we have written for the German Wiki here.
Another thing: The section "Possible explanations" is pretty much taken from Lima Filho, Roberto IRL (July 2, 2009). "Rationality Intertwined: Classical vs Institutional View". Available at SSRN 2389751: pp. 5–6. doi:10.2139/ssrn.2389751. That's why I added a citation at the end of the section, but basically it covers the whole section (and also I'm not sure if the Ben-Haim, Yakov (2006) citation there is appropriate; the original paper doesn't have it). best --Daimpi (talk) 19:29, 28 June 2015 (UTC)

Utility function explanations seem to be completely wrong


The experimentally observed choices have a rather mundane explanation — provided the utility function is concave, and the utility function is applied to expectations. It is easy to see in a simplified version of the game:

  • There are two urns with 6 balls each, one with RRBBBY balls, the other with RRBYYY.
  • You choose one urn (at random: they are randomized before you choose).
  • After this you are given 2 choices: $300 for R, or $300 for B.

In the first case the utility of the expected gain is U($100), in the second it is either U($50), or U($150); here U is the utility function. With concave U (as usually assumed) the first choice is better.

(This is OR, but as far as I can see, this is more or less identical to formula (1) in this result of quick googling.) --Ilya-zz (talk) 08:11, 3 April 2021 (UTC)
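A numerical check of the simplified game (a minimal sketch, taking U(x) = √x as one concave utility and, per the comment's assumption, applying U to the conditional expected gains):

<syntaxhighlight lang="python">
from math import sqrt

U = sqrt  # any concave utility function will do

# The two urns: RRBBBY and RRBYYY; one is picked at random before you bet.
urns = [{"R": 2, "B": 3, "Y": 1}, {"R": 2, "B": 1, "Y": 3}]

def value(colour):
    # Utility of the conditional expected gain, averaged over the random urn.
    return sum(0.5 * U(300 * urn[colour] / 6) for urn in urns)

print("bet on R:", value("R"))  # U($100) = 10.0 from either urn
print("bet on B:", value("B"))  # 0.5*U($150) + 0.5*U($50), roughly 9.66
</syntaxhighlight>

With any strictly concave U the R bet comes out ahead, matching the argument above.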

Withhold from editing briefly


I am a university student completing a Wikipedia editing assignment (Wikipedia course link for reference: https://outreachdashboard.wmflabs.org/courses/UQ/ECON3430_2021_(Semester_One_2021)) and I have made the most recent edit to this wiki page (26/4/21, 3:56pm). I have made some subtle changes to the wording of the earlier portion of the page and extended slightly upon the 'Decisions under uncertainty aversion' section, as well as added an image and an Academic Paper section towards the bottom.

The markers viewing the work will review the changes I have made and grade me. I am asking everyone if they could please withhold from making further edits until 10/5/21 to allow ample time for the tutors to see my version of the work and grade me.

Regards. — Preceding unsigned comment added by WHn457 (talkcontribs) 06:13, 26 April 2021 (UTC)

That's a terrible suggestion. An encyclopedia shouldn't care about your course. Also, your tutors can just look at the version that you edited, no need to stop changing it afterwards. — Preceding unsigned comment added by 141.70.3.46 (talk) 06:07, 2 July 2021 (UTC)

Templates


This article was flagged for tone in December 2010 with this edit, though the OP has given no indication what the problem is. Anyway, it looks OK to me, so I have removed it. If anyone feels the tone is still an issue, they should replace the template, and say what (in their opinion) is wrong with it. I trust this is OK with everyone. Moonraker12 (talk) 15:32, 27 July 2022 (UTC)

Possible corrections to final para


The final para contains two self-contradictory statements: "The work was made public in 2001, some 40 years after being published" and "The book is considered a highly-influential paper". To be made public means the same as to be published, and a book can't normally be considered a paper. I don't know what the writer is trying to say here. Could someone who does know suggest a more sensible way of formulating it? Andy Denis 12:43, 17 June 2023 (UTC) — Preceding unsigned comment added by Andy Denis (talkcontribs)