Readings: Manning and Schutze, Section 2.1
Intuitively, probability is a measure of how certain we are about a particular outcome. For instance, if we toss a coin, we expect it to come up heads half the time. When we roll a six-sided die, we expect to get a 6 one time in six. The probability of a coin coming up heads is .5, and the probability of a die coming up 4 is 1/6. Something that is certain has probability 1, whereas something that is impossible has probability 0.
There are two philosophical ways of viewing probability. In the frequentist view, probability is a ratio between two sets of events. For instance, given the set of coin tosses, we can consider the subset of tosses that come up heads. Thus the probability of getting heads is
(# heads) / (# tosses).
Given a set of observations, we can compute probabilities.
In the subjective view, probability is a degree of belief that a certain proposition is true (or that an event will occur). This need not be based on any prior observations - it is an indication of strength of belief. We might believe the probability that the next coin toss will come up heads is .5 (even though we obviously have never seen the next coin toss before!). For instance, a scientist might assert that the probability of a large comet hitting the earth in the next 100 years is .001 (even though the scientist has no record of such a comet ever hitting the earth).
It took more than three centuries to develop a mathematical theory of probability that did a reasonable job of capturing people's different views and intuitions. Modern probability theory started in the 1930s with work by Kolmogorov. There are more thorough treatments of the following concepts available on the web.
To develop a theory of probability we need to define the types of events that can occur and what it means to talk about the chance of some event occurring. Here we will be developing the model for discrete probability distributions, which are the most intuitive and useful for natural language applications.
We can think of probability theory as formalizing the notion of performing experiments. Say we have an experiment that has a certain number of possible outcomes. We can perform the experiment repeatedly and note what the outcomes are. For instance, we might do an experiment involving tossing a coin, and the possible outcomes would be heads or tails. This can be formalized by introducing the notion of a random variable, which can range over a set of outcomes (called the sample space). For example, we could represent coin tossing by a random variable TOSS, which has as its sample space the set {heads, tails}. Note that we are currently assuming that each toss of the coin is independent of the tosses that occurred before. Later on we will look at models where such independence assumptions do not hold.
We can now introduce the notion of a discrete probability distribution. For any random variable X with finite (or countable) sample space {x1, ..., xn}, a discrete probability distribution P is a function that satisfies three key properties:
(1) 0 <= P(X=xi) <= 1 for all i
(2) Σi P(X=xi) = 1
(3) P(X in {y1, ..., ym}) = Σi P(X=yi)
The first property constrains each value to lie between 0 and 1, and the second requires the total probability over the entire sample space to equal 1. The third property, called the countable additivity property, is the fundamental property that makes the mathematics work. These three properties are called the Kolmogorov axioms.
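To make the three properties concrete, here is a minimal Python sketch (the dictionary and helper name are ours, chosen for illustration) that checks them for a small hand-built distribution:

    # A small discrete distribution over a sample space, written as outcome -> probability.
    # The values are illustrative, not estimated from any real experiment.
    dist = {"heads": 0.5, "tails": 0.5}

    # Property (1): every probability lies between 0 and 1.
    assert all(0.0 <= p <= 1.0 for p in dist.values())

    # Property (2): the probabilities over the whole sample space sum to 1.
    assert abs(sum(dist.values()) - 1.0) < 1e-9

    # Property (3), countable additivity: the probability of a set of outcomes
    # is the sum of the probabilities of its members.
    def prob_of_event(dist, outcomes):
        return sum(dist[x] for x in outcomes)

    print(prob_of_event(dist, {"heads"}))           # 0.5
    print(prob_of_event(dist, {"heads", "tails"}))  # 1.0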
Note we have not said exactly how to compute the probability distribution. There are many possible functions that could work. For instance, we could assume that each outcome is equally likely and use the uniform distribution: in a sample space of size N, each outcome would have a probability of 1/N. A very useful distribution is the Bernoulli distribution, which is used for sample spaces of size 2. It has one parameter, p: the probability of one outcome is p and the probability of the other is (1 - p). In natural language processing, however, we generally don't know the distributions as a mathematical function, but must estimate them from a set of observations. In this case, the distributions are constructed by counting the number of times each element of the sample space occurs in a large series of experiments. We will talk about estimation procedures at some length later in the course.
Today, however, let's consider a simple case where we have a finite set of experiments that occurred and there will be no more. Specifically, consider a particular horse Harry, who ran 100 races in his career (these are the experiments) and has now passed away. We have developed a time machine that can take us back to a random race where we can bet. There is no way to identify the particular date of the race, however, so we never know which particular race will be "run" today.
The results of the race can be captured by a random variable R, with sample space {win, lose}. Say Harry won 20 of the races; then we could say
P(R=win) = 20/100 = .2
P(R=lose) = 80/100 = .8
Note that these values satisfy the requirements of a probability function. If this is all we know, when we go to bet, we should bet on the assumption that there is only a 20% chance that Harry will win.
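As a sketch of how such an estimate is computed from counts (the counts are the race totals above; the function name is ours), note that the result is just a Bernoulli-style distribution with p = .2 for winning:

    # Estimate a discrete distribution by counting the outcomes of past experiments.
    # Counts are the race results from the text: 20 wins, 80 losses.
    counts = {"win": 20, "lose": 80}

    def estimate(counts):
        total = sum(counts.values())
        return {outcome: c / total for outcome, c in counts.items()}

    P_R = estimate(counts)
    print(P_R)   # {'win': 0.2, 'lose': 0.8}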
Of course, to represent richer situations, we will need many different random variables. For instance, in modeling horse races we might care what the weather was like at each race (with a random variable W with values rain and shine). W has a probability distribution in its own right, but of more interest to us is how the two variables interact. To model this we talk of a joint probability distribution, which is a new probability distribution defined over the combination of the two random variables R and W. This joint distribution would have four values, corresponding to
<win, rain>, <lose, rain>, <win, shine>, <lose, shine>
Say that it rained in 30 of Harry’s races, and he won 15 of them. With this information and the above information about his overall wins and losses, we can figure out the following joint probabilities:
P(R=win, W=rain) = 15/100 = .15
P(R=win, W=shine) = 5/100 = .05
P(R=lose, W=rain) = 15/100 = .15
P(R=lose, W=shine) = 65/100 = .65
Note that the joint probability distribution obeys the three conditions for a probability distribution.
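A sketch of the joint distribution built from the counts above (the variable names are ours); it satisfies the same three properties as any other distribution:

    # Joint counts over pairs (R, W), taken from the race data in the text.
    joint_counts = {("win", "rain"): 15, ("win", "shine"): 5,
                    ("lose", "rain"): 15, ("lose", "shine"): 65}

    total = sum(joint_counts.values())   # 100 races
    P_joint = {pair: c / total for pair, c in joint_counts.items()}

    print(P_joint[("win", "rain")])      # 0.15
    print(sum(P_joint.values()))         # 1.0 (up to rounding) - the joint is itself a distribution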
If you are given a joint distribution, then you can derive the probability distributions for each single random variable (now called the marginal probability) by the following formulas, which should fit your intuition.
P(R=win) = P(R=win, W in {rain, shine}) = P(R=win, W=rain) + P(R=win, W=shine)
P(R=lose) = P(R=lose, W in {rain, shine}) = P(R=lose, W=rain) + P(R=lose, W=shine)
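Marginalizing is just summing the joint probabilities over the values of the variable we don't care about; a small sketch of that computation (the joint values are the ones derived above, the function name is ours):

    # Marginal P(R=r) obtained by summing the joint over all values of W.
    P_joint = {("win", "rain"): .15, ("win", "shine"): .05,
               ("lose", "rain"): .15, ("lose", "shine"): .65}

    def marginal_R(P_joint, r):
        return sum(p for (ri, w), p in P_joint.items() if ri == r)

    print(marginal_R(P_joint, "win"))    # 0.15 + 0.05 = 0.2
    print(marginal_R(P_joint, "lose"))   # 0.15 + 0.65 = 0.8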
If we are betting on Harry, we are more interested in how we expect him to do given certain conditions that we know to be the case. For instance, if it's raining, what is the probability that Harry will win? Intuitively, it is 15/30, or .5 (a big change from .2, his overall chance of winning). Conditional probability is a theory of how to model these intuitions.
Technically, the conditional probability is written as P(A | B), and defined as follows:
P(A | B) = P(A, B) / P(B)
Note that P(R=win | W=rain) = P(R=win, W=rain)/P(W=rain) = 15/30 = .5 as we expect. Note also that when we write an equation just mentioning the random variables, this is really an abbreviation for a universal quantification over all the values of the random variables. Thus, if random variable A has values ai, and B has values bj, then saying P(A | B) = P(A, B) / P(B) really means
P(A=ai | B=bj) = P(A=ai, B=bj)/P(B=bj) for all i, j
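In code, the definition is a single division of a joint probability by a marginal; a sketch using the joint values from the racing example (the function names are ours):

    # P(R=r | W=w) = P(R=r, W=w) / P(W=w), using the joint distribution from the text.
    P_joint = {("win", "rain"): .15, ("win", "shine"): .05,
               ("lose", "rain"): .15, ("lose", "shine"): .65}

    def marginal_W(P_joint, w):
        return sum(p for (r, wi), p in P_joint.items() if wi == w)

    def cond_R_given_W(P_joint, r, w):
        return P_joint[(r, w)] / marginal_W(P_joint, w)

    print(cond_R_given_W(P_joint, "win", "rain"))   # 0.15 / 0.30 = 0.5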
Almost all our knowledge about the world is in the form of conditional probabilities, i.e., it is relative to a certain context. If the above examples were more realistic, they would all be conditional on as many other variables as we use to represent the world. For instance, we might represent what Harry had for lunch, what Harry's current health is, or how Harry did in his last race. One of the key issues in formulating any problem is selecting what random variables to use and, when we formulate knowledge, knowing what the dependencies are.
For example, say we also represent when we got up in the morning, and how we got to the racetrack. Except for a few people who believe in lucky omens, most of us believe that when we got up and how we got to the racetrack doesn't affect how Harry performs in the race. This is the notion of independence. Two random variables are independent if the probability distribution of one isn't affected by the values of the other. For example, using R as the random variable for Harry winning or not, and G as the random variable that states whether we drove or walked to the racetrack, we believe
P(R | G) = P(R)
i.e., the probability of Harry winning is independent of how we got to the racetrack. Making independence assumptions is critical in building probabilistic models for it allows us to ignore certain information that is not relevant.
An equivalent way to define independence relates the joint distribution to the marginal distributions. In particular, random variables X and Y are independent if
P(X, Y) = P(X) * P(Y)
In other words, the probability of X=a and Y=b together is simply the product of the probabilities of X=a and Y=b. Either one of these definitions of independence can be used to derive the other, so they are formally equivalent.
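A sketch of checking the product definition numerically for R and W; as expected, the test fails, since rain makes Harry more likely to win:

    # X and Y are independent iff P(X=x, Y=y) == P(X=x) * P(Y=y) for every pair of values.
    P_joint = {("win", "rain"): .15, ("win", "shine"): .05,
               ("lose", "rain"): .15, ("lose", "shine"): .65}

    def independent(P_joint, tol=1e-9):
        xs = {x for x, _ in P_joint}
        ys = {y for _, y in P_joint}
        for x in xs:
            px = sum(p for (xi, _), p in P_joint.items() if xi == x)
            for y in ys:
                py = sum(p for (_, yi), p in P_joint.items() if yi == y)
                if abs(P_joint[(x, y)] - px * py) > tol:
                    return False
        return True

    print(independent(P_joint))   # False: P(win, rain) = .15 but P(win) * P(rain) = .2 * .3 = .06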
Clearly, in our racing example, R and W are not independent, as it is much more likely for Harry to win if it is raining. But let's introduce another random variable A that captures whether I attend the race (track) or not (home). Say I go to the track for 60 of the races, and Harry wins 12 of them. This means that the probability that Harry wins if I attend the race is 12/60 = .2 - the same as if I don't attend, and the same as his overall probability of winning. Thus we can show that the random variable R is independent of the random variable A:
P(R=win | A=track) = 12/60 = .2 = P(win)
P(R=win | A=home) = 8/40 = .2 = P(win)
P(R=lose | A=track) = 48/60 = .8 = P(lose)
P(R=lose | A=home) = 32/40 = .8 = P(lose)
i.e., we have shown that
P(R | A) = P(R).
Note that just because A and R are independent doesn't mean that going to the races doesn't affect how you bet! A might combine with other random variables, such as the weather. For instance, it might be that I went to the track twelve times when it rained, and Harry won every one of those races! This would indicate that Harry was certain to win when I attended and it was raining. We might not believe this correspondence is meaningful, but computing from the data, it is what we find. Thus, the random variable R is independent of A, but not independent of A and W together.
Since conditional probabilities are so important, we will spend some time developing some simple rules for manipulating them.
One simple theorem that is very useful allows us to compute certain conditional probabilities when we know other ones. This is called Bayes Rule and is written as
P(A | B) = (P(B | A) * P(A)) / P(B)
As an example, say we don't know all the information about Harry's races, but we do know the probability that he wins any given race (.2), the probability that he wins when it's raining (.5), and the probability that it rains (.3). We can calculate the probability that it rained on a day when Harry won a race:
P(rain | win) = (P(win | rain) * P(rain)) / P(win) = (.5 * .3) / .2 = .75
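The same calculation as a few lines of code (plain arithmetic; the variable names are ours):

    # Bayes rule: P(rain | win) = P(win | rain) * P(rain) / P(win)
    p_win      = 0.2   # overall probability that Harry wins
    p_win_rain = 0.5   # probability that Harry wins given that it rains
    p_rain     = 0.3   # probability that it rains

    p_rain_win = p_win_rain * p_rain / p_win
    print(p_rain_win)   # 0.75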
Looking back at the full joint distribution, we can see that this is the right result. In particular, if we fill in the counts for each event in a table as follows
Joint distribution counts | R=win | R=lose | Marginal counts for W
W=rain                    |  15   |   15   |          30
W=shine                   |   5   |   65   |          70
Marginal counts for R     |  20   |   80   |         100
we see that we can get P(rain | win) = .75 by dividing 15 (the count for <win, rain>) by 20 (the count for the marginal <win>). In situations where we can fill in a table like this one, we can compute the conditional, marginal and joint probabilities directly. But in most cases, the number of random variables and/or the number of values for these variables makes building such a table impossible. Bayes rule is used extensively in a wide range of applications to compute the probabilities we want to have from probabilities that we can obtain more easily.
Another very useful rule, which computes joint probabilities from conditional probabilities, is called the chain rule (shown here for a set of random variables A to Z):
P(A, ..., Z) = P(A | B, ..., Z) * P(B | C, ..., Z) * ... * P(Y | Z) * P(Z)
To see why this theorem holds, consider the three variable case, which would claim
P(A, B, C) = P(A | B, C) * P(B | C) * P(C)
If we expand out the conditional probabilities with their definitions, we get
P(A, B, C) = (P(A, B, C) / P(B, C)) * (P(B, C) / P(C)) * P(C)
When written this way, we see that each term's denominator cancels the next term's numerator, leaving us with the simple statement that P(A, B, C) equals itself. The chain rule will be important when we start to estimate probability distributions from data.
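As a sanity check, here is a sketch that verifies the three-variable chain rule on a small made-up joint distribution (the numbers are invented purely for illustration and sum to 1):

    from itertools import product

    # An invented joint distribution over three binary variables A, B, C (values 0/1).
    P = {(0, 0, 0): .10, (0, 0, 1): .05, (0, 1, 0): .15, (0, 1, 1): .10,
         (1, 0, 0): .20, (1, 0, 1): .05, (1, 1, 0): .15, (1, 1, 1): .20}

    def p(**fixed):
        # Marginal probability that the named variables (a, b, c) take the given values.
        idx = {"a": 0, "b": 1, "c": 2}
        return sum(pr for outcome, pr in P.items()
                   if all(outcome[idx[k]] == v for k, v in fixed.items()))

    for a, b, c in product([0, 1], repeat=3):
        # Chain rule: P(A,B,C) = P(A | B,C) * P(B | C) * P(C)
        chain = (p(a=a, b=b, c=c) / p(b=b, c=c)) * (p(b=b, c=c) / p(c=c)) * p(c=c)
        assert abs(chain - p(a=a, b=b, c=c)) < 1e-12
    print("chain rule holds for all 8 outcomes")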
For typical applications in natural language processing, we are interested in sequences of events, such as the parts of speech of a series of words, or the sequence of words that best accounts for a series of acoustic events. To model such phenomena, we use sequences of random variables, with an index indicating the time. For instance, if we wanted to model three tosses of a coin in a row, we would have three random variables T1, T2, T3. When the probability distribution for Ti does not change over time, the sequence is called a stationary stochastic process. These are very useful for many applications. For instance, the coin tossing example can be modeled as a stationary stochastic process because the distribution of each Ti is the same for every i and is independent of all the other tosses, namely P(Ti = h) = .5 for all i. Thus, it is easy to compute the probability of a sequence, say H H H. We simply multiply the individual probabilities together:
P(T1=h, T2=h, T3=h) = P(T1=h) * P(T2=h) * P(T3=h) = .5^3 = .125
In general, however, the probability distribution of the i’th event might depend on many of the events that precede it.
There is a very useful class of stochastic processes that falls between the complete-independence case and this fully general one. In this class, the probability distribution for Ui depends only on what happened at the immediately preceding step, i.e., on Ui-1. Such models have the Markov property, and they will be one of the main tools for a host of NLP applications. Markov models are useful because they are often powerful enough to produce reasonable models of behavior, but simple enough to allow efficient algorithms for finding probabilities of sequences. While you might not think so at first, Markov models are also stationary processes, as the probability distribution at each time step can be modeled by the same conditional probability distribution P(Ui | Ui-1).
Note we can define a hierarchy of Markov models where the current distribution depends only on k previous events. As k increases, however, the computational burden of computing over them increases correspondingly.
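A sketch of computing the probability of a sequence under a first-order Markov model; the initial and transition probabilities below are invented for illustration:

    # Under the Markov property: P(u1, ..., un) = P(u1) * P(u2 | u1) * ... * P(un | un-1)
    initial = {"rain": 0.3, "shine": 0.7}                 # P(U1), illustrative numbers
    transition = {"rain":  {"rain": 0.6, "shine": 0.4},   # P(Ui | Ui-1), illustrative numbers
                  "shine": {"rain": 0.2, "shine": 0.8}}

    def sequence_prob(seq):
        p = initial[seq[0]]
        for prev, cur in zip(seq, seq[1:]):
            p *= transition[prev][cur]
        return p

    print(sequence_prob(["rain", "rain", "shine"]))   # 0.3 * 0.6 * 0.4 = 0.072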
Not all stochastic processes are stationary. There is a tradition among mathematicians of using examples that involve taking balls from urns. Let's say we have an urn containing one red ball and one blue ball. We choose a ball from the urn, and then we put it back along with another ball of the same color. We can model this process with a series of random variables Ui, but they are not independent of each other. To see this, consider the first two turns. For the first selection, clearly P(U1=red) = P(U1=blue) = .5. But now consider the probability distribution for U2. If we picked a red ball at step one, we now have a probability of 2/3 of picking red at step two. Similarly, if we picked a blue ball at step one, there is only a 1/3 probability of selecting red at step two. In other words,
P(U2=red | U1=red) = 2/3
P(U2=red | U1=blue) = 1/3
The probabilities for P(U2=blue) are easily derived from the above. For the third drawing, the probability P(U3=red) depends on what happened in the previous two draws, and in general, for the n'th turn, the probability depends on the n-1 previous turns. Clearly, it is nontrivial to compute the probability of a sequence such as red, red, blue, red.
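It is nontrivial by hand, but easy to compute mechanically by multiplying the probability of each draw given the urn's contents at that point; a sketch:

    # Urn process: start with one red and one blue ball; after each draw, return the
    # ball together with another ball of the same color.
    def sequence_prob(colors):
        counts = {"red": 1, "blue": 1}
        p = 1.0
        for c in colors:
            p *= counts[c] / sum(counts.values())   # P(this draw | all previous draws)
            counts[c] += 1                          # put the ball back plus one more of its color
        return p

    print(sequence_prob(["red", "red", "blue", "red"]))   # (1/2)*(2/3)*(1/4)*(3/5) = 0.05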
To use probability theory for natural language applications, we need to know what the probability distributions are for language. Unfortunately, we don't know what these distributions are. Rather, we have to estimate the distributions from looking at data, which we will discuss next lecture.
The other big issue, however, is what form of probability distribution works best for particular applications. In general, we will always be trying to balance a tradeoff between the accuracy to which a probability distribution can model the phenomena versus the amount of data we would require to adequately estimate it. Today, let us consider one simple example. Say we want to build a program using probability distributions that identifies the part of speech for words in a corpus - a part-of-speech tagger.
The simplest probability model for this problem would introduce a random variable Ci for the tag at position i, with the set of part-of-speech tags as its sample space. In this case, we assume that the distribution of each Ci is simply equal to that of a single random variable C, i.e.,
P(Ci) = P(C) for all i
Given this model, the best tagger we could build would simply pick the most likely tag, namely noun, for every word, and we could expect to get about 33% accuracy.
We can do much better by using a conditional probability model, where the probability of the tag is conditioned on what the word is. If we let Wi be a series of random variables ranging over all the words, then we need to estimate the distribution for P(Ci | Wi). Again, we simplify the model by assuming these probabilities are independent of position in the text, i.e., there are two random variables C and W such that
P(Ci | Wi) = P(C | W) for all i
Given this model, the tagger would pick the most likely part of speech for each word (e.g., eat is most likely a V, hat is most likely an N, etc.). Surprisingly, such models can obtain about 90% accuracy in tagging tests. As a result, this model serves as a good baseline against which we can compare new models.
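A sketch of this baseline tagger: estimate P(C | W) by counting (word, tag) pairs in a tagged corpus and then choose the most frequent tag for each word. The tiny corpus and tag names below are invented, and a real tagger needs a fallback for unseen words (here we simply guess N):

    from collections import Counter, defaultdict

    # A tiny invented tagged corpus of (word, tag) pairs.
    tagged_corpus = [("the", "DET"), ("hat", "N"), ("is", "V"), ("red", "ADJ"),
                     ("I", "PRO"), ("eat", "V"), ("the", "DET"), ("hat", "N")]

    # Counts behind P(C | W): how often each tag occurs with each word.
    counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        counts[word][tag] += 1

    def most_likely_tag(word):
        # Pick the most frequent tag for this word; back off to N for unseen words.
        return counts[word].most_common(1)[0][0] if word in counts else "N"

    print([most_likely_tag(w) for w in ["I", "eat", "the", "hat"]])   # ['PRO', 'V', 'DET', 'N']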
An even better model can be obtained with a model that depends not only on the current word, but also on the tag of the preceding word, i.e., P(Ci | Wi, Ci-1). These are often called bigram models. For part-of-speech tagging, approaches based on this model can obtain about 95% accuracy for English.
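Estimating the bigram model is again just counting, now over triples of current word, current tag, and previous tag; a minimal sketch (the smoothing and search needed in a real tagger are omitted, and the corpus is invented):

    from collections import Counter, defaultdict

    # Estimate P(Ci | Wi, Ci-1) by counting how often tag c appears with word w
    # when the previous tag was prev.
    tagged_corpus = [("the", "DET"), ("hat", "N"), ("is", "V"), ("red", "ADJ")]

    bigram_counts = defaultdict(Counter)
    prev_tag = "<s>"                      # marker for the start of the sequence
    for word, tag in tagged_corpus:
        bigram_counts[(word, prev_tag)][tag] += 1
        prev_tag = tag

    def p_tag(tag, word, prev_tag):
        context = bigram_counts[(word, prev_tag)]
        return context[tag] / sum(context.values()) if context else 0.0

    print(p_tag("N", "hat", "DET"))       # 1.0 in this tiny corpus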