20.1  Two Continuous Random Variables
So far, our attention in this lesson has been directed towards the joint probability distribution of two or more discrete random variables. Now, we'll turn our attention to continuous random variables. Along the way, always in the context of continuous random variables, we'll look at formal definitions of joint probability density functions, marginal probability density functions, expectation and independence. We'll also apply each definition to a particular example.
 Joint probability density function

Let \(X\) and \(Y\) be two continuous random variables, and let \(S\) denote the two-dimensional support of \(X\) and \(Y\). Then, the function \(f(x,y)\) is a joint probability density function (abbreviated p.d.f.) if it satisfies the following three conditions:
1. \(f(x,y)\geq 0\)
2. \(\int_{-\infty}^\infty \int_{-\infty}^\infty f(x,y)\,dx\,dy=1\)
3. \(P[(X,Y) \in A]=\iint_A f(x,y)\,dx\,dy\), where \(\{(X, Y) \in A\}\) is an event in the \(xy\)-plane.
The first condition, of course, just tells us that the function must be nonnegative. Keeping in mind that \(f(x,y)\) is some two-dimensional surface floating above the \(xy\)-plane, the second condition tells us that the volume defined by the support, the surface, and the \(xy\)-plane must be 1. The third condition tells us that in order to determine the probability of an event \(A\), you must integrate the function \(f(x,y)\) over the region defined by the event \(A\). That is, just as finding probabilities associated with one continuous random variable involved finding areas under curves, finding probabilities associated with two continuous random variables involves finding volumes of solids that are defined by the event \(A\) in the \(xy\)-plane and the two-dimensional surface \(f(x,y)\).
Example 20-1
Let \(X\) and \(Y\) have joint probability density function:
\(f(x,y)=4xy\)
for \(0<x<1\) and \(0<y<1\). Is \(f(x,y)\) a valid p.d.f.?
Solution
Before trying to verify that \(f(x,y)\) is a valid p.d.f., it might help to get a feel for what the function looks like. Here's my attempt at a sketch of the function:
The red square is the joint support of \(X\) and \(Y\) that lies in the \(xy\)-plane. The blue tent-shaped surface is my rendition of the \(f(x,y)\) surface. Now, in order to verify that \(f(x,y)\) is a valid p.d.f., we first need to show that \(f(x,y)\) is always nonnegative. Clearly, that's the case, as the surface lies completely above the \(xy\)-plane. If you're still not convinced, you can see that substituting any \(x\) and \(y\) value in the joint support into the function \(f(x,y)\) always yields a positive value.
Now, we just need to show that the volume of the solid defined by the support, the \(xy\)-plane, and the surface is 1:

\(\int_0^1 \int_0^1 4xy\,dx\,dy=\int_0^1 4y\left[\dfrac{x^2}{2}\right]_{x=0}^{x=1}dy=\int_0^1 2y\,dy=\left[y^2\right]_{y=0}^{y=1}=1\)
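This volume can also be spot-checked numerically. Here's a minimal sketch in Python (not part of the original text; the helper name `double_integral` is our own) that approximates the double integral with a midpoint Riemann sum:

```python
# Midpoint Riemann-sum check that the volume under f(x, y) = 4xy
# over the unit square is 1. (Illustrative sketch; the helper name
# double_integral is an assumption, not from the text.)

def double_integral(f, n=400):
    """Approximate the integral of f over [0, 1] x [0, 1] with an
    n-by-n midpoint rule."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            total += f(x, y) * h * h
    return total

volume = double_integral(lambda x, y: 4 * x * y)
print(volume)  # very close to 1.0
```

Because \(4xy\) is linear in each variable separately, the midpoint rule is essentially exact here; a coarser grid would do just as well.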
What is \(P(Y<X)\)?
Solution
In order to find the desired probability, we again need to find a volume of a solid as defined by the surface, the \(xy\)plane, and the support. This time, however, the volume is not defined in the \(xy\)plane by the unit square. Instead, the region in the \(xy\)plane is constrained to be just that portion of the unit square for which \(y<x\). If we start with the support \(0<x<1\) and \(0<y<1\) (the red square), and find just the portion of the red square for which \(y<x\), we get the blue triangle:
So, it's the volume of the solid between the \(f(x,y)\) surface and the blue triangle that we need to find. That is, to find the desired volume (and hence the desired probability), we need to integrate from \(y=0\) to \(y=x\), and then from \(x=0\) to \(x=1\):

\(P(Y<X)=\int_0^1 \int_0^x 4xy\,dy\,dx=\int_0^1 4x\left[\dfrac{y^2}{2}\right]_{y=0}^{y=x}dx=\int_0^1 2x^3\,dx=\left[\dfrac{x^4}{2}\right]_{x=0}^{x=1}=\dfrac{1}{2}\)
Given the symmetry of the solid about the plane \(y=x\), perhaps we shouldn't be surprised to discover that our calculated probability equals \(\frac{1}{2}\)!
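The same midpoint-sum idea, with the grid restricted to the cells where \(y<x\), reproduces this probability numerically. A hedged sketch (the helper name is our own, and the accuracy is limited by the jagged cut along the diagonal):

```python
# Numerical check of P(Y < X) for f(x, y) = 4xy on the unit square:
# sum f over midpoints of an n-by-n grid, keeping only cells with y < x.
# (Illustrative sketch, not from the text.)

def prob_y_less_than_x(n=1000):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            if y < x:                      # restrict to the blue triangle
                total += 4 * x * y * h * h
    return total

print(prob_y_less_than_x())  # close to 0.5
```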
 Marginal Probability Density Functions

The marginal probability density functions of the continuous random variables \(X\) and \(Y\) are given, respectively, by:
\(f_X(x)=\int_{-\infty}^\infty f(x,y)\,dy,\qquad x\in S_1\)
and:
\(f_Y(y)=\int_{-\infty}^\infty f(x,y)\,dx,\qquad y\in S_2\)
where \(S_1\) and \(S_2\) are the respective supports of \(X\) and \(Y\).
Example (continued)
Let \(X\) and \(Y\) have joint probability density function:
\(f(x,y)=4xy\)
for \(0<x<1\) and \(0<y<1\). What is \(f_X(x)\), the marginal p.d.f. of \(X\), and \(f_Y(y)\), the marginal p.d.f. of \(Y\)?
Solution
In order to find the marginal p.d.f. of \(X\), we need to integrate the joint p.d.f. \(f(x,y)\) over \(0<y<1\), that is, over the support of \(Y\). Doing so, we get:
\(f_X(x)=\int_0^1 4xy dy=4x\left[\dfrac{y^2}{2}\right]_{y=0}^{y=1}=2x, \qquad 0<x<1\)
In order to find the marginal p.d.f. of \(Y\), we need to integrate the joint p.d.f. \(f(x,y)\) over \(0<x<1\), that is, over the support of \(X\). Doing so, we get:
\(f_Y(y)=\int_0^1 4xy dx=4y\left[\dfrac{x^2}{2}\right]_{x=0}^{x=1}=2y, \qquad 0<y<1\)
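As a quick numerical sanity check (our own sketch, not from the text), integrating \(f(x,y)=4xy\) over one variable at a few fixed values of the other should reproduce \(f_X(x)=2x\) and \(f_Y(y)=2y\):

```python
# Verify numerically that integrating out y gives f_X(x) = 2x, and
# integrating out x gives f_Y(y) = 2y, for f(x, y) = 4xy on (0, 1)^2.
# (Illustrative sketch; the helper name integrate is our own.)

def integrate(g, n=1000):
    """Midpoint-rule approximation of the integral of g over [0, 1]."""
    h = 1.0 / n
    return sum(g((k + 0.5) * h) * h for k in range(n))

for t in (0.25, 0.5, 0.9):
    f_X = integrate(lambda y: 4 * t * y)   # joint p.d.f. with x fixed at t
    f_Y = integrate(lambda x: 4 * x * t)   # joint p.d.f. with y fixed at t
    print(t, f_X, f_Y)                     # both close to 2 * t
```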
Definition. The expected value of a continuous random variable \(X\) can be found from the joint p.d.f. of \(X\) and \(Y\) by:
\(E(X)=\int_{-\infty}^\infty \int_{-\infty}^\infty xf(x,y)\,dx\,dy\)
Similarly, the expected value of a continuous random variable \(Y\) can be found from the joint p.d.f. of \(X\) and \(Y\) by:
\(E(Y)=\int_{-\infty}^\infty \int_{-\infty}^\infty yf(x,y)\,dy\,dx\)
Example (continued)
Let \(X\) and \(Y\) have joint probability density function:
\(f(x,y)=4xy\)
for \(0<x<1\) and \(0<y<1\). What is the expected value of \(X\)? What is the expected value of \(Y\)?
Solution
The expected value of \(X\) is \(\frac{2}{3}\), as is found here:

\(E(X)=\int_0^1 \int_0^1 x(4xy)\,dy\,dx=\int_0^1 4x^2\left[\dfrac{y^2}{2}\right]_{y=0}^{y=1}dx=\int_0^1 2x^2\,dx=\left[\dfrac{2x^3}{3}\right]_{x=0}^{x=1}=\dfrac{2}{3}\)
We'll leave it to you to show, not surprisingly, that the expected value of \(Y\) is also \(\frac{2}{3}\).
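Both expectations can likewise be spot-checked numerically with a midpoint Riemann sum; a minimal sketch (the helper name `expectation` is ours, not from the text):

```python
# Numerical check that E(X) = E(Y) = 2/3 for f(x, y) = 4xy on (0, 1)^2,
# using a midpoint Riemann sum over an n-by-n grid.
# (Illustrative sketch, not from the text.)

def expectation(weight, n=500):
    """Approximate the integral of weight(x, y) * f(x, y) over the
    unit square, where f(x, y) = 4xy."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            total += weight(x, y) * 4 * x * y * h * h
    return total

print(expectation(lambda x, y: x))  # close to 2/3
print(expectation(lambda x, y: y))  # close to 2/3
```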
Definition. The continuous random variables \(X\) and \(Y\) are independent if and only if the joint p.d.f. of \(X\) and \(Y\) factors into the product of their marginal p.d.f.s, namely:
\(f(x,y)=f_X(x)f_Y(y), \qquad x\in S_1, \qquad y\in S_2\)
Example (continued)
Let \(X\) and \(Y\) have joint probability density function:
\(f(x,y)=4xy\)
for \(0<x<1\) and \(0<y<1\). Are \(X\) and \(Y\) independent?
Solution
The random variables \(X\) and \(Y\) are indeed independent, because:
\(f(x,y)=4xy=f_X(x) f_Y(y)=(2x)(2y)=4xy\)
So, this is an example in which the support is "rectangular" and \(X\) and \(Y\) are independent.
Note that, as is true in the discrete case, if the support \(S\) of \(X\) and \(Y\) is "triangular," then \(X\) and \(Y\) cannot be independent. On the other hand, if the support is "rectangular" (that is, a product space), then \(X\) and \(Y\) may or may not be independent. Let's take a look at another example in which the support is rectangular but, unlike the previous example, \(X\) and \(Y\) are dependent.
Example 20-2
Let \(X\) and \(Y\) have joint probability density function:
\(f(x,y)=x+y\)
for \(0<x<1\) and \(0<y<1\). Are \(X\) and \(Y\) independent?
Solution
Again, in order to determine whether \(X\) and \(Y\) are independent, we need to check whether the joint p.d.f. of \(X\) and \(Y\) factors into the product of the marginal p.d.f.s. The marginal p.d.f. of \(X\) is:
\(f_X(x)=\int_0^1(x+y)dy=\left[xy+\dfrac{y^2}{2}\right]^{y=1}_{y=0}=x+\dfrac{1}{2},\qquad 0<x<1\)
And, the marginal p.d.f. of \(Y\) is:
\(f_Y(y)=\int_0^1(x+y)dx=\left[xy+\dfrac{x^2}{2}\right]^{x=1}_{x=0}=y+\dfrac{1}{2},\qquad 0<y<1\)
Clearly, \(X\) and \(Y\) are dependent, because:
\(f(x,y)=x+y\neq f_X(x) f_Y(y)=\left(x+\dfrac{1}{2}\right) \left(y+\dfrac{1}{2}\right)\)
This is an example in which the support is rectangular, and yet \(X\) and \(Y\) are dependent, as we just illustrated. Again, a rectangular support may or may not lead to independent random variables.
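One way to see the failure of factorization concretely is to compare the joint p.d.f. to the product of the marginals at a single point of the support; a sketch (our own illustration, not from the text):

```python
# For f(x, y) = x + y on (0, 1)^2, the marginals are f_X(x) = x + 1/2
# and f_Y(y) = y + 1/2. Check at one point that the joint p.d.f. does
# NOT equal the product of the marginals, so X and Y are dependent.
# (Note: the point must be chosen with care -- at (0.5, 0.5) the two
# quantities happen to agree, even though X and Y are dependent.)

x, y = 0.25, 0.25
joint = x + y                       # f(x, y) = 0.5
product = (x + 0.5) * (y + 0.5)     # f_X(x) * f_Y(y) = 0.5625
print(joint, product)               # 0.5 vs 0.5625 -- not equal
```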