
Independence (probability theory)

In probability theory, two random events are called independent if the occurrence of one of them does not change the probability of the occurrence of the other. Similarly, two random variables are called independent if the known value of one of them does not provide information about the other.

Contents

  • 1 Independent events
  • 2 Independent Sigma Algebras
  • 3 Independent random variables
    • 3.1 Definitions
    • 3.2 Properties of independent random variables
    • 3.3 n-ary independence
  • 4 See also

Independent Events

We assume throughout that a fixed probability space $(\Omega, \mathcal{F}, \mathbb{P})$ is given.

Definition 1. Two events $A, B \in \mathcal{F}$ are independent if

$\mathbb{P}(A \cap B) = \mathbb{P}(A) \cdot \mathbb{P}(B),$

i.e. the occurrence of $A$ does not change the probability of the occurrence of the event $B$.

Remark 1. If the probability of one of the events, say $B$, is nonzero, i.e. $\mathbb{P}(B) > 0$, then the definition of independence is equivalent to

$\mathbb{P}(A \mid B) = \mathbb{P}(A),$

i.e. the conditional probability of the event $A$ given $B$ equals the unconditional probability of the event $A$.
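
For a concrete illustration, here is a minimal sketch in Python that verifies both Definition 1 and the conditional form of Remark 1 by exhaustive enumeration; the two events on a pair of fair dice are arbitrary choices made for this example, not taken from the article.

    from fractions import Fraction
    from itertools import product

    # All 36 equally likely outcomes of rolling two fair dice.
    outcomes = list(product(range(1, 7), repeat=2))

    def prob(event):
        # Exact probability of an event given as a predicate on outcomes.
        return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

    A = lambda w: w[0] % 2 == 0      # illustrative event: first die is even
    B = lambda w: w[0] + w[1] == 7   # illustrative event: the sum is 7

    p_ab = prob(lambda w: A(w) and B(w))

    assert p_ab == prob(A) * prob(B)    # Definition 1: P(A ∩ B) = P(A) P(B)
    assert p_ab / prob(B) == prob(A)    # Remark 1: P(A | B) = P(A)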

Definition 2. Let $\{A_i\}_{i \in I} \subset \mathcal{F}$ be a (finite or infinite) family of random events, where $I$ is an arbitrary index set. These events are pairwise independent if any two events of the family are independent, i.e.

$\mathbb{P}(A_i \cap A_j) = \mathbb{P}(A_i) \cdot \mathbb{P}(A_j), \quad \forall i \neq j.$

Definition 3. Let $\{A_i\}_{i \in I} \subset \mathcal{F}$ be a (finite or infinite) family of random events. These events are jointly independent if for any finite subfamily $\{A_{i_k}\}_{k=1}^{N}$ of these events:

$\mathbb{P}(A_{i_1} \cap \ldots \cap A_{i_N}) = \mathbb{P}(A_{i_1}) \cdot \ldots \cdot \mathbb{P}(A_{i_N}).$

Remark 2. Joint independence obviously implies pairwise independence. The converse is generally not true.

Example 1. Suppose three fair coins are tossed. Define the events as follows:

  • $A_1$: coins 1 and 2 show the same side;
  • $A_2$: coins 2 and 3 show the same side;
  • $A_3$: coins 1 and 3 show the same side.

It is easy to verify that any two of these events are independent. Yet the three are jointly dependent: knowing, for example, that the events $A_1$ and $A_2$ occurred, we know for sure that $A_3$ also occurred. More formally,

$\mathbb{P}(A_i \cap A_j) = \frac{1}{4} = \frac{1}{2} \cdot \frac{1}{2} = \mathbb{P}(A_i) \cdot \mathbb{P}(A_j) \quad \forall i \neq j,$

while on the other hand

$\mathbb{P}(A_1 \cap A_2 \cap A_3) = \frac{1}{4} \neq \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2} = \mathbb{P}(A_1) \cdot \mathbb{P}(A_2) \cdot \mathbb{P}(A_3).$
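
This verification can also be carried out mechanically. The following Python sketch (the H/T outcome encoding is an illustrative assumption) enumerates all eight outcomes and checks both claims:

    from itertools import product

    # The 8 equally likely outcomes of tossing three fair coins.
    outcomes = list(product("HT", repeat=3))

    def prob(event):
        return sum(1 for w in outcomes if event(w)) / len(outcomes)

    A1 = lambda w: w[0] == w[1]  # coins 1 and 2 show the same side
    A2 = lambda w: w[1] == w[2]  # coins 2 and 3 show the same side
    A3 = lambda w: w[0] == w[2]  # coins 1 and 3 show the same side

    # Pairwise independence: P(Ai ∩ Aj) = 1/4 = P(Ai) P(Aj) for all i != j.
    for A, B in [(A1, A2), (A1, A3), (A2, A3)]:
        assert prob(lambda w: A(w) and B(w)) == prob(A) * prob(B)

    # No joint independence: P(A1 ∩ A2 ∩ A3) = 1/4, not 1/8.
    p_all = prob(lambda w: A1(w) and A2(w) and A3(w))
    assert p_all == 0.25 and p_all != prob(A1) * prob(A2) * prob(A3)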

Independent Sigma Algebras

Definition 4. Let $\mathcal{A}_1, \mathcal{A}_2 \subset \mathcal{F}$ be two sigma-algebras on the same probability space. They are called independent if any two of their members are independent of each other, that is:

$\mathbb{P}(A_1 \cap A_2) = \mathbb{P}(A_1) \cdot \mathbb{P}(A_2), \quad \forall A_1 \in \mathcal{A}_1, \; A_2 \in \mathcal{A}_2.$

If instead of two there is a whole (possibly infinite) family of sigma-algebras, then its pairwise and joint independence are defined in the obvious way; a small finite example is sketched below.
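
As a concrete finite illustration of Definition 4, the following Python sketch checks the product rule for every pair of members; the two-coin sample space and the partition-based construction of the generated sigma-algebras are assumptions made for this example.

    from itertools import chain, combinations

    # Sample space of two fair coin tosses; each outcome has probability 1/4.
    omega = ["HH", "HT", "TH", "TT"]

    def prob(event):
        return len(event) / len(omega)

    def sigma_from_partition(blocks):
        # The sigma-algebra generated by a finite partition: all unions of blocks.
        return [frozenset(chain.from_iterable(c))
                for r in range(len(blocks) + 1)
                for c in combinations(blocks, r)]

    A1 = sigma_from_partition([{"HH", "HT"}, {"TH", "TT"}])  # generated by toss 1
    A2 = sigma_from_partition([{"HH", "TH"}, {"HT", "TT"}])  # generated by toss 2

    # Definition 4: the product rule holds for every pair of members.
    assert all(prob(a & b) == prob(a) * prob(b) for a in A1 for b in A2)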

Independent Random Variables

Definitions

Definition 5. Let $(X_i)_{i \in I}$ be a family of random variables, so that $X_i \colon \Omega \to \mathbb{R}$ for all $i \in I$. These random variables are pairwise independent if the sigma-algebras $\{\sigma(X_i)\}_{i \in I}$ generated by them are pairwise independent. They are jointly independent if the sigma-algebras generated by them are jointly independent.

Note that in practice, unless the context indicates otherwise, "independence" is understood to mean joint independence.

The definition given above is equivalent to any of the following. Two random variables $X, Y$ are independent if and only if:

  • for any $A, B \in \mathcal{B}(\mathbb{R})$:

$\mathbb{P}(X \in A, \; Y \in B) = \mathbb{P}(X \in A) \cdot \mathbb{P}(Y \in B);$

  • for any Borel functions $f, g \colon \mathbb{R} \to \mathbb{R}$, the random variables $f(X)$ and $g(Y)$ are independent;
  • for any bounded Borel functions $f, g \colon \mathbb{R} \to \mathbb{R}$:

$\mathbb{E}\left[f(X)g(Y)\right] = \mathbb{E}\left[f(X)\right] \cdot \mathbb{E}\left[g(Y)\right].$
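
As an informal numerical illustration of the last criterion, here is a Monte Carlo sketch in Python; it assumes NumPy is available, and the distributions of $X$ and $Y$ and the bounded functions $f$ and $g$ are arbitrary choices for the example.

    import numpy as np

    rng = np.random.default_rng(seed=0)
    n = 10**6

    x = rng.uniform(0.0, 1.0, size=n)   # X ~ Uniform[0, 1], arbitrary choice
    y = rng.standard_normal(size=n)     # Y ~ N(0, 1), drawn independently of X

    f, g = np.sin, np.cos               # bounded Borel functions, arbitrary choice

    lhs = np.mean(f(x) * g(y))              # estimates E[f(X) g(Y)]
    rhs = np.mean(f(x)) * np.mean(g(y))     # estimates E[f(X)] E[g(Y)]
    print(lhs, rhs)  # agree up to Monte Carlo error of order n**-0.5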

Properties of Independent Random Variables

  • Let $\mathbb{P}^{X,Y}$ be the distribution of the random vector $(X, Y)$, and $\mathbb{P}^{X}$, $\mathbb{P}^{Y}$ the distributions of $X$ and $Y$ respectively. Then $X$ and $Y$ are independent if and only if

$\mathbb{P}^{X,Y} = \mathbb{P}^{X} \otimes \mathbb{P}^{Y},$

where $\otimes$ denotes the (direct) product of measures.

  • Let $F_{X,Y}$, $F_X$, $F_Y$ be the cumulative distribution functions of $(X, Y)$, $X$ and $Y$ respectively. Then $X$ and $Y$ are independent if and only if

$F_{X,Y}(x, y) = F_X(x) \cdot F_Y(y).$

  • Let the random variables $X$ and $Y$ be discrete. Then they are independent if and only if (see the numeric sketch after this list)

$\mathbb{P}(X = i, \; Y = j) = \mathbb{P}(X = i) \cdot \mathbb{P}(Y = j).$

  • Let the random variables $X$ and $Y$ be jointly absolutely continuous, i.e. their joint distribution has a density $f_{X,Y}(x, y)$. Then they are independent if and only if

$f_{X,Y}(x, y) = f_X(x) \cdot f_Y(y), \quad \forall (x, y) \in \mathbb{R}^2,$

where $f_X(x)$ and $f_Y(y)$ are the densities of $X$ and $Y$ respectively.

  • Let the random variables $X$ and $Y$ be independent and have finite variance. Then they are uncorrelated.
  • Any family of jointly independent random variables is pairwise independent, but the converse does not hold: not every pairwise independent family is jointly independent. The latter is demonstrated by the coin-toss example of S. N. Bernstein (cf. Example 1 above).
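
The following Python sketch checks the discrete factorization criterion and the resulting zero covariance; NumPy is assumed, and the marginal pmfs are arbitrary illustrative choices.

    import numpy as np

    # Joint pmf of discrete X in {0, 1} and Y in {0, 1, 2}, built as a product
    # of (arbitrarily chosen) marginals, so X and Y are independent by design.
    px = np.array([0.4, 0.6])
    py = np.array([0.2, 0.5, 0.3])
    joint = np.outer(px, py)            # joint[i, j] = P(X = i, Y = j)

    # Factorization criterion: P(X = i, Y = j) = P(X = i) P(Y = j) for all i, j.
    marg_x = joint.sum(axis=1)
    marg_y = joint.sum(axis=0)
    assert np.allclose(joint, np.outer(marg_x, marg_y))

    # Independence implies zero covariance (the converse is false in general).
    xs = np.arange(len(px))[:, None]    # values of X, as a column
    ys = np.arange(len(py))[None, :]    # values of Y, as a row
    cov = (xs * ys * joint).sum() - (xs * joint).sum() * (ys * joint).sum()
    assert abs(cov) < 1e-12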

n-ary independence

In general, for any $n \geqslant 2$ one can speak of $n$-ary independence. The idea is similar: a family of random variables is $n$-ary independent if every subfamily of cardinality $n$ is jointly independent. $n$-ary independence has been used in theoretical computer science to prove a theorem about the MAXEkSAT problem.
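
As an illustration, here is a Python sketch of the standard XOR construction of a 2-ary (pairwise) independent family that is not jointly independent; this is the textbook construction, offered here as an assumption for illustration rather than something taken from the source article.

    from itertools import product

    # Two independent fair bits b1, b2 yield a third bit b1 XOR b2; the
    # resulting triple is pairwise independent but not jointly independent.
    outcomes = [(b1, b2, b1 ^ b2) for b1, b2 in product((0, 1), repeat=2)]

    def prob(event):
        return sum(1 for w in outcomes if event(w)) / len(outcomes)

    # Every pair of coordinates is uniform on {0, 1}^2, hence independent ...
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        for vi, vj in product((0, 1), repeat=2):
            joint = prob(lambda w: w[i] == vi and w[j] == vj)
            assert joint == prob(lambda w: w[i] == vi) * prob(lambda w: w[j] == vj)

    # ... but the triple (1, 1, 1) is impossible, so no joint independence
    # (a jointly independent triple would give probability 1/8, not 0).
    assert prob(lambda w: w == (1, 1, 1)) == 0

Roughly speaking, it is small sample spaces of this kind, with limited independence standing in for full independence, that such derandomization arguments rely on.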

See also

  • Product of measures
  • Fubini-Tonelli theorem
  • Borel-Cantelli lemma
  • Kolmogorov's zero-one law
  • Copula
Source: https://ru.wikipedia.org/w/index.php?title=Независимость_(теория_вероятностей)&oldid=90416228

