# Distribution Function Technique

You might not have been aware of it at the time, but we have already used the distribution function technique at least twice in this course to find the probability density function of a function of a random variable. For example, we used the distribution function technique to show that:

\(Z=\dfrac{X-\mu}{\sigma}\)

follows a standard normal distribution when *X* is normally distributed with mean *μ* and standard deviation *σ*. And, we used the distribution function technique to show that, when *Z* follows the standard normal distribution:

\(Z^2\)

follows the chi-square distribution with 1 degree of freedom. In summary, we used the **distribution function technique** to find the p.d.f. of the random variable \(Y=u(X)\) by:

(1) First, finding the cumulative distribution function:

\(F_Y(y)=P(Y\leq y)\)

(2) Then, differentiating the cumulative distribution function \(F_Y(y)\) to get the probability density function \(f_Y(y)\). That is:

\(f_Y(y)=F'_Y(y)\)
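As a quick numerical sanity check of the second fact above (a simulation sketch, not part of the formal argument), we can draw standard normal values and confirm that their squares behave like a chi-square random variable with 1 degree of freedom, which has mean 1 and variance 2:

```python
# Monte Carlo check: if Z is standard normal, Z^2 should be chi-square(1),
# so its sample mean should be near 1 and its sample variance near 2.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)
y = z**2

print(y.mean())  # should be close to 1
print(y.var())   # should be close to 2
```

A simulation like this can't prove the distributional result, but it is a cheap way to catch an algebra mistake in a derivation.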

Now that we've officially stated the distribution function technique, let's take a look at a few more examples.

### Example

Let *X* be a continuous random variable with the following probability density function:

\(f(x)=3x^2\)

for 0 < *x* < 1. What is the probability density function of \(Y=X^2\)?

**Solution.** Note that (1) \(Y=X^2\) is an increasing function of *X* on the support 0 < *x* < 1, and (2) the support of *Y* is 0 < *y* < 1. That noted, let's now use the distribution function technique to find the p.d.f. of *Y*. First, we find the cumulative distribution function of *Y*:

\(F_Y(y)=P(Y\leq y)=P(X^2\leq y)=P(X\leq \sqrt{y})=\int_0^{\sqrt{y}} 3x^2\, dx=\Big[x^3\Big]_0^{\sqrt{y}}=y^{3/2}\)

Having shown that the cumulative distribution function of *Y* is:

\(F_Y(y)=y^{3/2}\)

for 0 < *y* < 1, we now just need to differentiate \(F_Y(y)\) to get the probability density function \(f_Y(y)\). Doing so, we get:

\(f_Y(y)=F'_Y(y)=\dfrac{3}{2} y^{1/2}\)

for 0 < *y* < 1. Our calculation is complete! We have successfully used the distribution function technique to find the p.d.f. of *Y*, when *Y* was an increasing function of *X*. (By the way, you might find it reassuring to verify that \(f_Y(y)\) does indeed integrate to 1 over the support of *y*. In general, that's not a bad thing to check.)
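The result above can also be checked by simulation (a sketch, not part of the derivation): since \(F_X(x)=x^3\) on 0 < *x* < 1, inverse-transform sampling gives \(X=U^{1/3}\) for *U* uniform on (0, 1), and the empirical CDF of \(Y=X^2\) should track \(y^{3/2}\):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)
x = u ** (1 / 3)   # inverse-CDF sampling: F_X(x) = x^3 on (0, 1)
y = x**2           # the transformed variable Y = X^2

# Compare the empirical CDF of Y with the derived F_Y(y) = y^(3/2)
grid = np.linspace(0.05, 0.95, 19)
empirical = np.array([(y <= t).mean() for t in grid])
theoretical = grid ** 1.5
print(np.abs(empirical - theoretical).max())  # should be small
```

If the maximum discrepancy were not small, that would signal an error somewhere in the CDF calculation.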

One thing you might note in the last example is that great care was taken to subscript the cumulative distribution functions and probability density functions with either an *X* or a *Y* to indicate to which random variable each function belonged. For example, in finding the cumulative distribution function of *Y*, we started with the cumulative distribution function of *Y*, and ended up working with the cumulative distribution function of *X*! If we hadn't used the subscripts, we would have had a good chance of confusing the two and botching the calculation. In short, using subscripts is a good habit to follow!

### Example

Let *X* be a continuous random variable with the following probability density function:

\(f(x)=3(1-x)^2\)

for 0 < *x* < 1. What is the probability density function of \(Y=(1-X)^3\) ?

**Solution.** Note that (1) \(Y=(1-X)^3\) is a decreasing function of *X* on the support 0 < *x* < 1, and (2) the support of *Y* is 0 < *y* < 1. That noted, let's now use the distribution function technique to find the p.d.f. of *Y*. First, we find the cumulative distribution function of *Y* (because the function is decreasing in *X*, the inequality flips when we solve for *X*):

\(F_Y(y)=P(Y\leq y)=P((1-X)^3\leq y)=P(X\geq 1-y^{1/3})=\int_{1-y^{1/3}}^{1} 3(1-x)^2\, dx=\Big[-(1-x)^3\Big]_{1-y^{1/3}}^{1}=y\)

Having shown that the cumulative distribution function of *Y* is:

\(F_Y(y)=y\)

for 0 < *y* < 1, we now just need to differentiate \(F_Y(y)\) to get the probability density function \(f_Y(y)\). Doing so, we get:

\(f_Y(y)=F'_Y(y)=1\)

for 0 < *y* < 1. That is, *Y* is a *U*(0, 1) random variable. (Again, you might find it reassuring to verify that \(f_Y(y)\) does indeed integrate to 1 over the support of *y*.)
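Here too, a short simulation (a sketch under the same inverse-transform idea as before) can confirm the conclusion: since \(F_X(x)=1-(1-x)^3\) on 0 < *x* < 1, we can sample \(X=1-(1-U)^{1/3}\) for *U* uniform, and \(Y=(1-X)^3\) should then look uniform on (0, 1), with mean 1/2 and variance 1/12:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)
x = 1 - (1 - u) ** (1 / 3)   # inverse-CDF sampling: F_X(x) = 1 - (1 - x)^3
y = (1 - x) ** 3             # should be Uniform(0, 1)

print(y.mean())  # should be close to 1/2
print(y.var())   # should be close to 1/12
```

Notice that in this construction \((1-X)^3 = 1-U\), which is itself uniform, so the simulation is really just tracing the same algebra as the derivation.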