Recent language models have shown impressive multilingual performance,
even when not explicitly trained for it. Despite this, there are concerns
about the quality of their outputs across different languages. In this paper, we show how a disparity in the treatment of different languages arises at the tokenization stage, well before a model is even invoked. The same text translated into different languages can have drastically different tokenization lengths, with differences of up to 15 times in some cases. These disparities persist even for tokenizers that are intentionally trained for multilingual support. Character-level and byte-level models also exhibit encoding-length differences of more than 4 times for some language pairs. This
leads to unfair treatment of some language communities with regard to the cost of accessing commercial language services, processing time and latency, and the amount of content that can be provided as context to the models. Therefore, we make the case that we should train future
language models using multilingually fair subword tokenizers.
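To make the tokenization-length disparity concrete, a minimal sketch of how such a comparison can be measured is given below. The tokenizer (tiktoken's cl100k_base) and the sample sentences are illustrative assumptions, not the exact setup used in our experiments.

```python
# Minimal sketch: count how many tokens the same sentence takes in different
# languages. The tokenizer (cl100k_base) and the example sentences are
# illustrative choices, not the paper's exact experimental setup.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# The same sentence, manually translated (illustrative examples).
samples = {
    "English": "How are you today?",
    "German": "Wie geht es dir heute?",
    "Greek": "Πώς είσαι σήμερα;",
}

baseline = len(enc.encode(samples["English"]))
for language, text in samples.items():
    n_tokens = len(enc.encode(text))
    # Report the token count and its length relative to the English baseline.
    print(f"{language}: {n_tokens} tokens ({n_tokens / baseline:.1f}x English)")
```

The ratio relative to English is the quantity of interest: it translates directly into relative API cost, latency, and how much of a fixed context window a given language consumes.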