Large deviations theory is a well-studied area that has been shown to have numerous applications. Broadly speaking, the theory provides analytical approximations of the probabilities of certain types of rare events. Moreover, it has recently proven instrumental in studying the complexity of methods that solve stochastic optimization problems by replacing expectations with sample averages (an approach known in the literature as sample average approximation). Typical results, however, assume that the underlying random variables are either i.i.d. or exhibit some form of Markovian dependence. Our interest in this paper is to study the application of large deviations results in the context of estimators built with Latin Hypercube sampling, a well-known sampling technique for variance reduction. We show that a large deviation principle holds for Latin Hypercube sampling for functions in one dimension and for separable multi-dimensional functions. Moreover, in these cases the upper bound on the probability of a large deviation is no higher under Latin Hypercube sampling than under Monte Carlo sampling. We extend the latter property to functions that are monotone in each argument. Numerical experiments illustrate the theoretical results presented in the paper.
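For intuition, the following is a minimal sketch (illustrative code, not from the paper) of one-dimensional Latin Hypercube sampling compared with plain Monte Carlo: the unit interval is split into n equal-width strata, one uniform point is drawn per stratum, and the points are randomly permuted. The function f, sample sizes, and replication counts below are arbitrary choices for illustration.

```python
import random

def lhs_sample_1d(n, rng):
    """Latin Hypercube sample of size n on [0, 1): one uniform draw
    from each stratum [k/n, (k+1)/n), in randomly permuted order."""
    points = [(k + rng.random()) / n for k in range(n)]
    rng.shuffle(points)
    return points

def mc_sample_1d(n, rng):
    """Plain Monte Carlo sample of size n on [0, 1)."""
    return [rng.random() for _ in range(n)]

def sample_mean_estimates(f, sampler, n, reps, seed):
    """Repeatedly build the sample-average estimator of E[f(U)]."""
    rng = random.Random(seed)
    return [sum(f(u) for u in sampler(n, rng)) / n for _ in range(reps)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Estimate E[f(U)] with f(u) = u^2 (true value 1/3) under both schemes.
f = lambda u: u * u
lhs_var = variance(sample_mean_estimates(f, lhs_sample_1d, 100, 200, 0))
mc_var = variance(sample_mean_estimates(f, mc_sample_1d, 100, 200, 0))
```

In this one-dimensional setting the variance of the Latin Hypercube estimator is typically far smaller than that of the Monte Carlo estimator for the same sample size, which is the variance reduction property referred to in the abstract; the paper's contribution concerns the stronger, exponential-rate (large deviation) behavior of such estimators.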