It is desired to test H0: μ = 55 against H1: μ < 55 using α = 0.10. The population in question is normally distributed with a standard deviation of 20. A random sample of 64 will be drawn from this population. If μ is really equal to 50, what is the probability that the hypothesis test would lead the investigator to commit a Type II error?
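The β (Type II error) probability can be found in two steps: locate the critical sample mean under H0, then find the probability that the sample mean falls in the non-rejection region when μ = 50. A minimal sketch of that calculation (variable names are mine, not from the problem), using the standard normal distribution from Python's standard library:

```python
from statistics import NormalDist

# Values taken from the problem statement.
mu0, mu_true = 55.0, 50.0        # hypothesized and true means
sigma, n, alpha = 20.0, 64, 0.10
se = sigma / n ** 0.5            # standard error of the mean = 20/8 = 2.5

z = NormalDist()                 # standard normal distribution

# Lower-tailed test: reject H0 when the sample mean falls below
# the critical value x̄_crit = μ0 + z(α)·SE.
xbar_crit = mu0 + z.inv_cdf(alpha) * se   # ≈ 51.80

# Type II error: fail to reject H0 (x̄ ≥ x̄_crit) even though μ = 50.
beta = 1 - z.cdf((xbar_crit - mu_true) / se)
print(round(beta, 2))            # → 0.24
```

With n = 64 the rejection cutoff sits at roughly 51.80, so β is the area of the N(50, 2.5²) sampling distribution lying at or above that cutoff, about 0.24.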