Talk:Memorylessness

Notes link is broken! — Preceding unsigned comment added by 151.66.156.126 (talk) 09:02, 13 January 2015 (UTC)

How does one prove that the exponential distribution is the ONLY distribution that has the memoryless property?

That is an excellent question, and at some point soon I'll add something to the article on this point. Here's the quick version of the answer:
Let G(t) = Pr(X > t).
Then basic laws of probability quickly imply that G(t) gets smaller as t gets bigger. The memorylessness of this distribution is expressed as
Pr(X > t + s | X > t) = Pr(X > s).
By the definition of conditional probability, this implies
Pr(X > t + s)/Pr(X > t) = Pr(X > s).
Thus we have the functional equation
G(t + s) = G(t) G(s)
AND we have the fact that G is a monotone decreasing function.
The functional equation alone implies that G, restricted to rational multiples of any particular number, is an exponential function. Combined with the fact that G is monotone, this implies that G is an exponential function on its whole domain.
That's a bit quick and hand-waving, but the detailed proof can be reconstructed from it. Michael Hardy 00:00, 13 November 2005 (UTC)
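
To make the rational-multiples step above concrete, here is one way the reconstruction can go (a sketch in the notation above, assuming 0 < G(1) < 1):
G(n) = G(1 + 1 + ... + 1) = G(1)^n for every positive integer n.
G(1) = G(n · (1/n)) = G(1/n)^n, so G(1/n) = G(1)^(1/n).
Combining the two, G(m/n) = G(1)^(m/n) for all positive rationals m/n.
Setting λ = −ln G(1), this says G(q) = e^(−λq) for every rational q ≥ 0; since G is monotone and every real t ≥ 0 is squeezed between rationals, G(t) = e^(−λt) on the whole domain.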

independence?

question: considering discrete-time processes, would the states be independent if the distribution of the values was exponential (or geometric)? thanks, Akshayaj 19:56, 20 June 2006 (UTC)

Regarding discrete and continuous stochastic processes, or indexed sets of random variables Xi, geometric and exponential distributions routinely model derived random variables rather than the Xi which constitute the process. For example, a random waiting time, call it W, is the count of time points (for a discrete-time process) or the length of the time interval (for a continuous-time process) before or between some occurrence(s) in the process Xi.
Any sequence of geometric random variables Xi for all i = 1, 2, 3, ... is an infinite discrete-time stochastic process. Any sequence of exponential random variables Xi for all i = 1, ..., n is a finite discrete-time stochastic process. Those statements are true of any probability distributions, however, and they are silent regarding memorylessness or independence or stationarity.
A sequence of exponential random variables Xi for all i = 1, ..., n, to continue one example, may be called n observations of exponentially distributed data. If there is no further structure then it may not be fruitful to call the collection a discrete-time stochastic process in [0, infinity) -- which is the range of an exponential r.v. -- but formally it is such a process.
In the last hour I rewrote the body of this article in terms of values m and n in a discrete index set and values t and s in a continuous index set, retaining X in both cases for the random variable whose "memory" is under discussion. Those Xs are random variables derived from the stochastic processes, if any. That is, they are functions of the Xi in this note. --P64 (talk) 21:08, 4 March 2010 (UTC)
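
To illustrate the point about derived random variables, here is a minimal sketch (assuming Python, a Bernoulli process with a made-up success probability p = 0.2, and 100,000 simulated waits): the trials Xi constitute the process, while the waiting time W is derived from them, and it is W, not the Xi, that is geometrically distributed.

<syntaxhighlight lang="python">
import random
from collections import Counter

p = 0.2  # assumed success probability of each Bernoulli trial X_i

def waiting_time():
    """Count the trials up to and including the first success.

    The X_i are the Bernoulli trials constituting the process; the
    waiting time W is a random variable derived from them.
    """
    w = 1
    while random.random() >= p:  # X_i = 0 (failure), keep waiting
        w += 1
    return w

samples = [waiting_time() for _ in range(100_000)]

# Empirical distribution of W against the geometric pmf p*(1-p)^(k-1).
counts = Counter(samples)
for k in range(1, 6):
    observed = counts[k] / len(samples)
    expected = p * (1 - p) ** (k - 1)
    print(k, round(observed, 4), round(expected, 4))
</syntaxhighlight>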

need help

In the lead section I have put a suggestive but shallow second paragraph in place of the following.

wherein any derived probability from a set of random samples is distinct and has no information (i.e. "memory") of earlier samples.

What does that mean?

In order to improve this article much, we need to commit to whether the subject pertains to probability, or to something modeled by probability (perhaps a trial or a process or "sampling"), or to both. This category error, or category ambiguity, plagues some neighboring articles -- perhaps all of the articles with "process" in the title? --P64 (talk) 21:33, 4 March 2010 (UTC)

Ambiguity

"Memoryless" has two different meanings that evolved separately in different branches of probability: one is the Markov property; the second is the one related to conditional expectation when the distribution is applied to a stopping time. We need to make sure the two uses of the term are not mixed up. Limit-theorem (talk) 10:50, 21 June 2013 (UTC)

As far as I can tell, they are essentially the same: the probability of the outcome conditional on the whole history equals its probability conditional on the current state alone. It is just that in a stochastic process P(A) is meaningless; it has to be P(A | state). Markov memorylessness is simply that P(A | state & previous states) is equal for all values of the previous states. SamBC(talk) 16:35, 17 June 2014 (UTC)
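
As a quick numerical illustration of the formulation above, here is a minimal sketch (assuming Python and a made-up two-state chain with arbitrary transition probabilities): the estimated probability of the next state given the current state comes out the same whatever the previous state was.

<syntaxhighlight lang="python">
import random
from collections import defaultdict

# Assumed two-state Markov chain: transition[s] = P(next = 1 | current = s).
transition = {0: 0.3, 1: 0.7}

def step(state):
    return 1 if random.random() < transition[state] else 0

path = [0]
for _ in range(200_000):
    path.append(step(path[-1]))

# Estimate P(next = 1 | current = 0) separately for each previous state.
counts = defaultdict(lambda: [0, 0])  # previous -> [visits, next == 1]
for prev, cur, nxt in zip(path, path[1:], path[2:]):
    if cur == 0:
        counts[prev][0] += 1
        counts[prev][1] += nxt

for prev in sorted(counts):
    visits, ones = counts[prev]
    print("P(next=1 | current=0, previous=%d) ~ %.3f" % (prev, ones / visits))
# Both estimates are close to transition[0] = 0.3; the previous state is irrelevant.
</syntaxhighlight>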

Excess exposition in lead

In contrast, let us examine a situation which would exhibit memorylessness.

Imagine a long hallway, lined on one wall with thousands of safes.

Each safe has a dial with 500 positions, and each has been assigned an opening position at random.

Imagine that an eccentric person walks down the hallway, stopping once at each safe to make a single random attempt to open it.

In this case, we might define random variable X as the lifetime of their search, expressed in terms of "number of attempts the person must make until they successfully open a safe".

In this case, E[X] will always be equal to the value of 500, regardless of how many attempts have already been made.

Each new attempt has a (1/500) chance of succeeding, so the person is likely to open exactly one safe sometime in the next 500 attempts — but with each new failure they make no "progress" toward ultimately succeeding.

Even if the safe-cracker has just failed 499 consecutive times (or 4,999 times), we expect to wait 500 more attempts until we observe the next success.

If, instead, this person focused their attempts on a single safe, and "remembered" their previous attempts to open it, they would be guaranteed to open the safe after, at most, 500 attempts (and, in fact, at onset would only expect to need 250 attempts, not 500).

Obviously, it's less efficient to repeat failure, on the supposition of a fixed function. In my opinion, this narrative really adds little more than that. Note that counting has the magic property of being able to encode your entire history in a compact integer (as opposed to picking a set of contiguously numbered balls out of a bubble-headed Dalek). — MaxEnt 17:21, 25 March 2018 (UTC)
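
For anyone who wants to check the quoted claim numerically, here is a minimal sketch (assuming Python and the 1/500 per-attempt success chance from the quoted text): the mean wait is about 500 attempts, and it is still about 500 even among runs that have already failed 499 times.

<syntaxhighlight lang="python">
import random

p = 1 / 500  # chance that a single random attempt opens a safe

def attempts_until_first_success():
    """Number of attempts up to and including the first success."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

trials = [attempts_until_first_success() for _ in range(100_000)]
print(sum(trials) / len(trials))  # close to 500

# Memorylessness: among runs that have already failed 499 times,
# the *additional* wait still averages about 500 attempts.
survivors = [n - 499 for n in trials if n > 499]
print(sum(survivors) / len(survivors))  # also close to 500
</syntaxhighlight>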