We consider a network with a finite number of users in which each user observes only a numerical value of its own payoff measurement. The system is interactive in the sense that each user's payoff is affected by both the environment state and the choices of all the other users. This scenario can be modeled as a dynamic robust game. We examine how risk-sensitive learners influence the convergence time of such a game in a specific network selection problem. Based on imitative combined fully distributed payoff and strategy learning (CODIPAS), we provide a simple class of network selection games in which convergence to the global optimum is obtained at a very fast rate. We show that the risk-sensitive index can be used to improve the convergence time over a wide range of parameters.
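To illustrate the kind of learning scheme the abstract refers to, the following is a minimal sketch of a risk-sensitive, imitative CODIPAS-style iteration for a two-user network selection game. The congestion payoff, step sizes, and the exponential risk-sensitive transform are illustrative assumptions, not the paper's exact scheme; each user updates a payoff estimate from its realized numerical measurement only, and reinforces its strategy imitatively.

```python
import math
import random

random.seed(0)

NETWORKS = 2   # each user selects one of two networks
USERS = 2
MU = 0.5       # risk-sensitivity index (assumed exponential transform)
STEPS = 500

def payoff(net, loads):
    """Illustrative congestion payoff: capacity shared by users on a network."""
    capacity = [1.0, 1.0]
    return capacity[net] / loads[net]

def risk_sensitive(u, mu):
    """Exponential risk-sensitive transform of an instantaneous payoff."""
    return (math.exp(mu * u) - 1.0) / mu

# strategies: per-user probability vectors over networks
x = [[1.0 / NETWORKS] * NETWORKS for _ in range(USERS)]
# payoff estimates, learned only from realized numerical measurements
r_hat = [[0.0] * NETWORKS for _ in range(USERS)]

for t in range(1, STEPS + 1):
    lam = 1.0 / t   # payoff-learning step size
    eps = 0.1       # strategy-learning rate
    actions = [random.choices(range(NETWORKS), weights=x[i])[0]
               for i in range(USERS)]
    loads = [max(1, actions.count(n)) for n in range(NETWORKS)]
    for i in range(USERS):
        a = actions[i]
        u = risk_sensitive(payoff(a, loads), MU)
        # payoff learning: update only the estimate of the chosen network
        r_hat[i][a] += lam * (u - r_hat[i][a])
        # imitative (Boltzmann-Gibbs weighted) strategy learning
        w = [x[i][n] * math.exp(eps * r_hat[i][n]) for n in range(NETWORKS)]
        s = sum(w)
        x[i] = [v / s for v in w]
```

The imitative update multiplies each current probability by an exponential weight of the learned payoff estimate and renormalizes, so strategies remain valid probability distributions at every step; the risk-sensitive index `MU` reshapes the payoffs the learners react to.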