A new breakthrough in vision language models (VLMs) has emerged with LLaVA-o1, developed by researchers from multiple Chinese universities.
This open-source model, inspired by OpenAI’s o1, aims to address the challenges of structured and systematic reasoning in VLMs.
The key problem with early VLMs was their inability to reason logically through complex tasks.
These models often jumped to conclusions without proper reasoning steps, leading to errors. LLaVA-o1 addresses this by breaking the reasoning process into four distinct stages: Summary, Caption, Reasoning, and Conclusion.
While only the final conclusion is visible to the user, the other stages form the internal reasoning trace, helping the model systematically work through problems.
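To make this concrete, here is a minimal sketch of how an application could surface only the conclusion while keeping the full trace available. It assumes the response is wrapped in the `<SUMMARY>`, `<CAPTION>`, `<REASONING>`, and `<CONCLUSION>` tags described in the paper; the example response text itself is invented for illustration:

```python
# Split a staged response into its four parts and show only the conclusion.
import re

STAGES = ["SUMMARY", "CAPTION", "REASONING", "CONCLUSION"]

def split_stages(response: str) -> dict[str, str]:
    """Return a mapping of stage name -> stage text for a tagged response."""
    stages = {}
    for stage in STAGES:
        match = re.search(rf"<{stage}>(.*?)</{stage}>", response, re.DOTALL)
        if match:
            stages[stage] = match.group(1).strip()
    return stages

raw = (
    "<SUMMARY>The question asks which object is heavier.</SUMMARY>"
    "<CAPTION>The image shows a bowling ball and a balloon.</CAPTION>"
    "<REASONING>Bowling balls are solid and dense; balloons are filled with air.</REASONING>"
    "<CONCLUSION>The bowling ball is heavier.</CONCLUSION>"
)

parsed = split_stages(raw)
print(parsed["CONCLUSION"])  # only this stage is surfaced to the user
```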
LLaVA-o1 introduces a technique called stage-level beam search, which generates multiple candidate outputs at each stage of reasoning and keeps only the best one before moving on. This sits between coarser inference-time scaling approaches, such as best-of-N sampling over entire responses, and fine-grained sentence-level search, and it improves accuracy without a proportional increase in compute.
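The following sketch illustrates the idea rather than LLaVA-o1's actual implementation: at each stage, sample several candidates, score them, and commit the winner to the context before generating the next stage. `generate_stage` and `score_candidate` are hypothetical stand-ins for the model's sampling and self-evaluation calls:

```python
# Stage-level beam search, sketched with placeholder model calls.
import random

STAGES = ["SUMMARY", "CAPTION", "REASONING", "CONCLUSION"]

def generate_stage(context: str, stage: str) -> str:
    """Placeholder: sample one candidate block for the given stage."""
    return f"<{stage}>candidate-{random.randint(0, 999)}</{stage}>"

def score_candidate(context: str, candidate: str) -> float:
    """Placeholder: judge how well the candidate fits the context."""
    return random.random()

def stage_level_beam_search(question: str, beam_width: int = 4) -> str:
    context = question
    for stage in STAGES:
        # Sample beam_width candidates for this stage only.
        candidates = [generate_stage(context, stage) for _ in range(beam_width)]
        # Commit the highest-scoring candidate, then proceed to the next stage.
        best = max(candidates, key=lambda c: score_candidate(context, c))
        context += "\n" + best
    return context

print(stage_level_beam_search("Which object in the image is heavier?"))
```

Because the search branches per stage rather than per token or per whole response, the number of model calls stays small while each committed block has still been checked against alternatives.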
During its training, LLaVA-o1 was fine-tuned on a dataset of around 100,000 image-question-answer pairs, annotated using GPT-4o.
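A single record in such a dataset can be pictured roughly as follows. The field names and the answer text are hypothetical, shown only to illustrate how the four-stage annotation scheme maps onto an image-question-answer pair:

```python
# Hypothetical shape of one training record: an image reference, a question,
# and an annotated answer broken into the four stage tags.
example = {
    "image": "images/000000123456.jpg",
    "question": "How many people are wearing hats?",
    "answer": (
        "<SUMMARY>Count the people in the image who wear hats.</SUMMARY>"
        "<CAPTION>Three people stand near a market stall; two wear hats.</CAPTION>"
        "<REASONING>Of the three visible people, the two on the left wear hats; "
        "the third is bareheaded.</REASONING>"
        "<CONCLUSION>Two people are wearing hats.</CONCLUSION>"
    ),
}
```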
Despite the limited data, LLaVA-o1 outperformed both open-source and closed-source models, including GPT-4o-mini and Gemini 1.5 Pro, delivering an average improvement of 6.9% over its base model across multimodal reasoning benchmarks.
The model sets a new standard for multimodal reasoning in VLMs, with its structured approach and efficient inference-time scaling paving the way for further improvements on complex reasoning tasks.