Neural Gas (NG) is a vector quantization technique in which a set of prototypes self-organizes to represent the topological structure of the data. The NG learning algorithm estimates the prototype locations in feature space by stochastic gradient descent on an energy function. In this paper we show that when the data deviate from idealized distributional assumptions, the behavior of the Neural Gas model can be drastically affected, and the model no longer preserves the topology of the feature space as desired. In particular, we show that the NG learning algorithm is sensitive to the presence of outliers because of their influence on the adaptation step. We incorporate a robust strategy into the learning algorithm, based on M-estimators, in which the influence of outlying observations is bounded. Finally, we present a comparative study of several estimators, showing the superior performance of our proposed method over the original NG on static data clustering tasks with both synthetic and real data sets.
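To make the sensitivity to outliers concrete, the following sketch implements a single NG adaptation step in Python: each prototype is pulled toward the sample with a strength that decays exponentially with its distance rank, so a far-away outlier drags every prototype by an amount proportional to its distance. The optional Huber-type weight is only an illustrative choice of M-estimator (the paper's exact estimator is not specified here); it bounds the pull each sample can exert.

```python
import numpy as np

def ng_update(prototypes, x, eps=0.1, lam=1.0, huber_c=None):
    """One Neural Gas adaptation step for a single sample x.

    prototypes : (K, d) array of prototype locations.
    x          : (d,) sample vector.
    eps        : learning rate.
    lam        : neighborhood range of the rank-based decay.
    huber_c    : if given, residuals are re-weighted with a Huber-type
                 M-estimator weight (illustrative assumption), capping
                 the effective pull of any sample at huber_c.
    """
    dists = np.linalg.norm(prototypes - x, axis=1)
    ranks = np.argsort(np.argsort(dists))      # rank 0 = closest prototype
    h = np.exp(-ranks / lam)                   # rank-based neighborhood function
    residuals = x - prototypes
    if huber_c is not None:
        # Huber weight: 1 inside the threshold, huber_c / ||r|| beyond it,
        # so the weighted residual norm never exceeds huber_c.
        norms = np.maximum(dists, 1e-12)
        w = np.minimum(1.0, huber_c / norms)
        residuals = residuals * w[:, None]
    return prototypes + eps * h[:, None] * residuals
```

With `huber_c` set, an outlier can move a prototype by at most `eps * huber_c`, whereas under the plain rule the displacement grows linearly with the outlier's distance.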