Research on the emergence of communication has introduced a language learning paradigm based on goal-driven interaction between neural network agents. While generalization to unseen examples is a particularly desirable property of the communication protocols that agents develop for solving tasks, it is better achieved when agents have a grounded understanding of the quantitative properties of the concepts they refer to. In this work, I focus on the generalization capabilities of communication protocols, specifically for numeral systems, which admit two distinct modes of generalization: interpolation and extrapolation. I formulate the problem of learning numeral systems in a multi-agent communication setting. I then propose and evaluate a neural network model utilizing modulus operations, and analyze its extrapolation performance in comparison to a baseline LSTM-based model. I find that although the proposed model achieves minor improvements in extrapolation, it suffers in its ability to express numerical values.