7A Calculo De Probabilidades
🃏 Probability & Statistics
Uncertainty
♠️ Basic Probability, Combinatorics
- ♠️ The hidden order in randomness.
♣️ Distributions & Data
- ♣️ Extracting truth from numbers.
♥️ Stochastic Processes
- ♥️ Markov chain puzzles, random walks, gambler’s ruin scenarios.
♦️ Bayesian Statistics
- ♦️ Updating beliefs with evidence.
1. Random Experiments (La experiencia del azar)
- Random experiment: Any process or action that leads to different possible outcomes, such as rolling a die or flipping a coin. The key feature is that the result cannot be predicted with certainty.
- Sample space (Espacio muestral): The set of all possible outcomes. For example, for a coin toss, the sample space is {Heads, Tails}.
2. Mathematical Model of Probability (El modelo matemático de la probabilidad)
- Probability function: A function that assigns a probability to each event (a subset of the sample space), which must satisfy:
- $P(E) \geq 0$ for any event $E$.
- $P(\Omega) = 1$, where $\Omega$ is the entire sample space.
- Additivity: For mutually exclusive events $E_1, E_2, \ldots$, $P(E_1 \cup E_2 \cup \dots) = P(E_1) + P(E_2) + \dots$.
3. Probability Assignment (Asignación de probabilidades)
- Classical Probability: If all outcomes in the sample space are equally likely, the probability of an event $E$ is:
$$P(E) = \frac{\text{Number of favorable outcomes for } E}{\text{Total number of possible outcomes}}$$
- Frequentist Probability: Interprets probability as the long-run relative frequency of an event occurring after many trials.
- Subjective Probability: Based on personal belief about how likely an event is.
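As an illustrative aside (not part of the original notes), the classical and frequentist assignments above can be compared directly in a short Python sketch using only the standard library; the event name and variable names are my own:

```python
import random
from fractions import Fraction

# Classical assignment: favorable outcomes / total outcomes for a fair die.
sample_space = [1, 2, 3, 4, 5, 6]
event = {2, 4, 6}                       # "roll an even number"
classical = Fraction(len(event), len(sample_space))

# Frequentist estimate: relative frequency over many simulated rolls.
trials = 100_000
hits = sum(random.choice(sample_space) in event for _ in range(trials))
frequentist = hits / trials

print(f"classical   P(even) = {classical}")        # 1/2
print(f"frequentist P(even) ≈ {frequentist:.3f}")  # close to 0.5
```

The simulated relative frequency drifts toward the classical value as the number of trials grows, which is exactly the frequentist reading of probability.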
4. Inclusion-Exclusion Principle (Fórmulas de inclusión-exclusión)
- Used to calculate the probability of the union of overlapping events:
$$P(A \cup B) = P(A) + P(B) - P(A \cap B)$$
- The principle generalizes to more than two events by alternately adding the probabilities of single events, subtracting pairwise intersections, adding triple intersections, and so on; the sketch below checks the two-event case on a die.
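A minimal sketch (my own, assuming a fair die and a small `prob` helper) comparing the inclusion-exclusion formula with the probability of the union computed directly:

```python
from fractions import Fraction

# Fair six-sided die: every outcome equally likely.
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    return Fraction(len(event), len(omega))

A = {2, 4, 6}   # even number
B = {3, 6}      # multiple of 3

# P(A ∪ B) via inclusion-exclusion, then directly from the union itself.
via_formula = prob(A) + prob(B) - prob(A & B)
direct = prob(A | B)
assert via_formula == direct == Fraction(2, 3)
print(via_formula)  # 2/3
```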
5. Extensions of the Model (Extensiones del modelo matemático)
- Extends the basic probability model to handle compound experiments (e.g., product sample spaces for several trials run together), combinations of events, and richer outcome spaces.
6. Conditional Probability (Probabilidad condicionada)
- The probability of an event $A$, given that another event $B$ has occurred, is calculated as:
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \quad \text{provided that } P(B) > 0$$
- This concept is essential for understanding Bayesian Probability.
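As a quick illustration (an assumption-laden sketch of mine, reusing the fair-die setup and a `prob` helper), the definition can be applied literally by intersecting event sets:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}              # fair die, equally likely outcomes

def prob(event):
    return Fraction(len(event & omega), len(omega))

A = {6}          # roll a six
B = {2, 4, 6}    # roll an even number

# Definition: P(A | B) = P(A ∩ B) / P(B), with P(B) > 0.
p_a_given_b = prob(A & B) / prob(B)
print(p_a_given_b)   # 1/3: knowing the roll is even raises P(six) from 1/6 to 1/3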
7. Independence of Events (Independencia de sucesos)
- Events $A$ and $B$ are independent if the occurrence of $A$ does not affect the probability of $B$ (and vice versa):
$$P(A \cap B) = P(A) \cdot P(B)$$
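For a concrete check (a sketch under my own choice of events, using two fair dice as the sample space), the product rule holds exactly when the events concern different dice:

```python
from fractions import Fraction
from itertools import product

# Sample space for two fair dice: all 36 ordered pairs, equally likely.
omega = set(product(range(1, 7), repeat=2))

def prob(event):
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] % 2 == 0}   # first die is even
B = {w for w in omega if w[1] > 4}        # second die shows 5 or 6

# Independence: the product rule holds exactly.
assert prob(A & B) == prob(A) * prob(B)   # 1/6 == 1/2 * 1/3
print(prob(A & B))                        # 1/6
```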
8. Random Variables (Variable aleatoria)
- A random variable $X$ assigns a numerical value to each outcome of a random experiment.
- Discrete random variables: Take on a finite or countable number of values (e.g., rolling a die).
- Probability mass function (PMF): Describes the probability that a discrete random variable takes a specific value, $P(X = x)$.
- Cumulative distribution function (CDF): $F(x) = P(X \leq x)$.
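A minimal sketch (mine, not from the course notes) representing the PMF of a fair die as a dictionary and deriving the CDF from it by summation:

```python
from fractions import Fraction

# PMF of a fair die: P(X = x) = 1/6 for x in {1, ..., 6}.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

def cdf(x):
    """Cumulative distribution function F(x) = P(X <= x)."""
    return sum(p for value, p in pmf.items() if value <= x)

print(cdf(3))   # 1/2, i.e. P(X <= 3)
print(cdf(6))   # 1, the whole sample space
```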
9. Mathematical Expectation (Esperanza matemática)
- The expected value $E(X)$ of a random variable $X$ is the weighted average of all possible values:
$$E(X) = \sum_{x} x \cdot P(X = x)$$
- It represents the "long-run average" if the experiment is repeated many times.
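The formula translates directly into code; a short sketch (again assuming the fair-die PMF from above) that evaluates the weighted sum exactly:

```python
from fractions import Fraction

# PMF of a fair die.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# E(X) = sum over x of x * P(X = x).
expected = sum(x * p for x, p in pmf.items())
print(expected)          # 7/2
print(float(expected))   # 3.5
```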
10. Descriptive Analysis of Probability Distributions (Análisis descriptivo de las distribuciones de probabilidad)
- Variance $\operatorname{Var}(X)$: Measures the spread of a random variable:
$$\operatorname{Var}(X) = E\left((X - E(X))^2\right)$$
- Standard deviation: The square root of the variance, showing how much the values deviate from the mean.
- Common discrete distributions include the Binomial and Poisson distributions.
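As a worked illustration (my own sketch, continuing with the fair die), the variance definition can be evaluated term by term and the standard deviation taken as its square root:

```python
from fractions import Fraction
import math

pmf = {x: Fraction(1, 6) for x in range(1, 7)}   # fair die

mean = sum(x * p for x, p in pmf.items())                     # E(X) = 7/2
variance = sum((x - mean) ** 2 * p for x, p in pmf.items())   # E((X - E(X))^2)
std_dev = math.sqrt(variance)

print(variance)            # 35/12
print(round(std_dev, 3))   # ≈ 1.708
```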
11. Repeated Trials (Pruebas repetidas)
- Involves models where experiments are repeated under identical conditions.
- For example, a Bernoulli trial is a single experiment with exactly two outcomes (success or failure); repeating it independently under identical conditions gives a sequence of Bernoulli trials.
- The Binomial distribution models the number of successes in a fixed number of independent Bernoulli trials.
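A short sketch (the helper `binomial_pmf` and the parameter choices are mine) computing the Binomial probability from the standard counting formula and sanity-checking it by simulating the repeated trials:

```python
import math
import random

def binomial_pmf(k, n, p):
    """P(exactly k successes in n independent Bernoulli(p) trials)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.5                        # e.g. 10 fair coin flips
exact = binomial_pmf(5, n, p)         # ≈ 0.2461

# Rough check: simulate the repeated trials directly.
trials = 50_000
hits = sum(sum(random.random() < p for _ in range(n)) == 5 for _ in range(trials))
print(exact, hits / trials)
```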
12. Random Fluctuations (Las fluctuaciones del azar)
- Explores the variability inherent in random processes.
- Law of Large Numbers: As the number of trials increases, the average of the results converges to the expected value.
- Central Limit Theorem: For a large number of trials, the distribution of the sum (or average) of the outcomes tends to follow a normal distribution, even if the original variables are not normally distributed.
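Both effects are easy to see by simulation; a minimal sketch (mine, with arbitrary sample sizes and a fixed seed for reproducibility) using repeated die rolls:

```python
import random
import statistics

random.seed(0)

# Law of Large Numbers: the running average of die rolls approaches E(X) = 3.5.
rolls = [random.randint(1, 6) for _ in range(100_000)]
for n in (10, 1_000, 100_000):
    print(n, sum(rolls[:n]) / n)

# Central Limit Theorem: averages of 100 rolls cluster around 3.5,
# with spread close to sigma / sqrt(100) ≈ 1.708 / 10.
averages = [statistics.mean(random.randint(1, 6) for _ in range(100))
            for _ in range(2_000)]
print(statistics.mean(averages), statistics.stdev(averages))
```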
Example: Rolling a Die
- Random Experiment: Rolling a fair six-sided die.
- Sample Space: $\Omega = \{1, 2, 3, 4, 5, 6\}$.
- Probability Assignment: $P(\text{rolling a 3}) = \frac{1}{6}$.
- Inclusion-Exclusion: $P(\text{even or } 3) = P(\{2, 4, 6\}) + P(\{3\}) - P(\emptyset) = \frac{3}{6} + \frac{1}{6} - 0 = \frac{2}{3}$.
- Conditional Probability: $P(\text{roll a 3} \mid \text{roll odd}) = \frac{P(\{3\})}{P(\{1, 3, 5\})} = \frac{1/6}{3/6} = \frac{1}{3}$.
- Random Variable: Let $X$ be the result of the roll. $X$ can take values 1, 2, 3, 4, 5, or 6.
- Expected Value: $E(X) = \sum_{i=1}^{6} i \cdot P(X = i) = \frac{1+2+3+4+5+6}{6} = 3.5$.
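All of these figures can be checked mechanically; a small self-contained sketch (my own, reusing the `prob` helper from earlier sections) asserts each one:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    return Fraction(len(event & omega), len(omega))

# Inclusion-exclusion: P(even or 3) = P(even) + P(3) - P(even and 3) = 2/3.
assert prob({2, 4, 6}) + prob({3}) - prob({2, 4, 6} & {3}) == Fraction(2, 3)

# Conditional probability: P(3 | odd) = P({3}) / P({1, 3, 5}) = 1/3.
assert prob({3}) / prob({1, 3, 5}) == Fraction(1, 3)

# Expected value: E(X) = (1 + 2 + ... + 6) / 6 = 7/2 = 3.5.
assert sum(Fraction(x, 6) for x in omega) == Fraction(7, 2)
```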