Meituan has announced the release and full open-sourcing of its native multimodal large model, LongCat-Next, along with its core component, the Discrete Native Resolution Visual Tokenizer (dNaViT). The model breaks away from the "language-centric", stitched-together architecture common in today's large models, in which separate encoders are grafted onto a language backbone, and instead maps images, audio, and text uniformly into homologous discrete tokens. Trained under a pure Next Token Prediction (NTP) paradigm, LongCat-Next lets vision and speech become AI's "native language."
According to the announcement, LongCat-Next delivers three key technological breakthroughs: first, the Discrete Native Autoregressive (DiNA) architecture eliminates modality barriers entirely; second, dNaViT constructs a "dictionary" for the visual world; and third, a semantically aligned, information-complete encoder resolves the long-standing industry challenge that "discretization inevitably causes information loss."
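The core idea of mapping all modalities into homologous discrete tokens can be sketched as follows. This is a hypothetical illustration, not Meituan's implementation: the vocabulary sizes, the offset-based mapping, and the function names (`to_unified`, `from_unified`) are all assumptions chosen to show how text tokens and discretized image/audio codes could share one token ID space, so that a single next-token-prediction loop covers every modality.

```python
# Hypothetical sketch of a unified discrete-token space (not LongCat-Next's
# actual design): text tokens and image/audio codebook entries are packed
# into disjoint ranges of one shared vocabulary.

TEXT_VOCAB = 1000    # assumed size of the text sub-vocabulary
IMAGE_CODES = 512    # assumed codebook size of a visual tokenizer (dNaViT-like)
AUDIO_CODES = 256    # assumed codebook size of an audio tokenizer

# Each modality occupies a disjoint ID range inside the shared vocabulary.
IMAGE_OFFSET = TEXT_VOCAB
AUDIO_OFFSET = TEXT_VOCAB + IMAGE_CODES
VOCAB_SIZE = TEXT_VOCAB + IMAGE_CODES + AUDIO_CODES

def to_unified(modality: str, local_id: int) -> int:
    """Map a modality-local code to the shared token ID space."""
    if modality == "text":
        return local_id
    if modality == "image":
        return IMAGE_OFFSET + local_id
    if modality == "audio":
        return AUDIO_OFFSET + local_id
    raise ValueError(f"unknown modality: {modality}")

def from_unified(token_id: int) -> tuple[str, int]:
    """Invert the mapping: recover (modality, local code) from a shared ID."""
    if token_id < IMAGE_OFFSET:
        return "text", token_id
    if token_id < AUDIO_OFFSET:
        return "image", token_id - IMAGE_OFFSET
    return "audio", token_id - AUDIO_OFFSET

# Tokens from all modalities interleave into one autoregressive sequence;
# a single NTP model would be trained to predict sequence[t+1] from sequence[:t+1].
sequence = [
    to_unified("text", 7),
    to_unified("image", 42),
    to_unified("audio", 13),
    to_unified("text", 99),
]
```

The design point this illustrates is that with disjoint ID ranges, one output head over the shared vocabulary can emit a token of any modality at any step, which is what makes a pure next-token-prediction objective possible across images, audio, and text.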