A common approach to handling categorical attributes in neural networks is one-hot encoding. However, depending on the number of categorical attributes and the number of levels each can take, this encoding can significantly increase the number of input nodes of the network. In this paper we analyze a type of randomized neural network called the random vector functional link (RVFL), which has a non-iterative learning approach: it can be trained in a single step. To avoid time-consuming preprocessing of the categorical attributes and any increase in the dimensionality of the input layer, we propose a two-stage learning approach. In the first stage, we use naive Bayes, which is also trained in one step, to compute the posterior probability of each class using all the attributes. In the second stage, we train a random vector functional link network using the continuous attributes as inputs and including the posterior probabilities obtained in the first stage as additional hidden units. Using benchmark datasets, we compare the classification performance of the network trained on continuous attributes only against the combination with naive Bayes, which incorporates the information in the categorical data. The experimental results show the effectiveness of the proposed approach for handling mixed data in random vector functional link networks.
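The two-stage idea can be illustrated with a minimal NumPy sketch. This is a hypothetical toy implementation, not the paper's code: the data, the sigmoid activation, the number of hidden units, and the ridge penalty are all illustrative assumptions. Stage one computes Laplace-smoothed naive Bayes posteriors from the categorical attributes; stage two builds an RVFL design matrix from the continuous inputs (direct links), random sigmoid features, and the posteriors, then solves the output weights in closed form, i.e. in one step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixed-type data: 2 continuous columns, 2 integer-coded categorical columns.
n = 200
X_cont = rng.normal(size=(n, 2))
X_cat = rng.integers(0, 3, size=(n, 2))
y = (X_cont[:, 0] + (X_cat[:, 0] == 1) > 0.5).astype(int)

def nb_posteriors(X_cat, y, n_classes, alpha=1.0):
    """Stage 1: categorical naive Bayes posteriors with Laplace smoothing."""
    n, d = X_cat.shape
    log_post = np.zeros((n, n_classes))
    for c in range(n_classes):
        mask = y == c
        ll = np.full(n, np.log(mask.mean()))  # log prior P(c)
        for j in range(d):
            n_levels = X_cat[:, j].max() + 1
            counts = np.bincount(X_cat[mask, j], minlength=n_levels) + alpha
            ll += np.log(counts / counts.sum())[X_cat[:, j]]
        log_post[:, c] = ll
    log_post -= log_post.max(axis=1, keepdims=True)  # for numerical stability
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)

P = nb_posteriors(X_cat, y, n_classes=2)          # shape (n, 2)

# Stage 2: RVFL with random (never-trained) hidden weights on the continuous part.
n_hidden = 30
W = rng.normal(size=(X_cont.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X_cont @ W + b)))       # sigmoid hidden layer

# Augmented design matrix: direct links + random features + NB posteriors.
D = np.hstack([X_cont, H, P])

# One-step training: ridge-regularized least squares for the output weights.
T = np.eye(2)[y]                                  # one-hot targets
beta = np.linalg.solve(D.T @ D + 1e-3 * np.eye(D.shape[1]), D.T @ T)

y_pred = np.argmax(D @ beta, axis=1)
accuracy = (y_pred == y).mean()
```

Concatenating the posteriors to the hidden layer (rather than one-hot encoding the categorical columns) keeps the augmented feature count equal to the number of classes, independent of how many levels the categorical attributes have.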