
Acknowledgements: Insights on vLLM Kernel from UC Berkeley
10 Jul 2025
We extend our gratitude to Zhuohan Li, Simon Mo, and Kaichao You from UC Berkeley for their valuable insights, which contributed to the vLLM kernel discussion.

The Research Team Behind the Development of the phi-3 LLMs
10 Jul 2025
Discover the names of the talented researchers and scientists who collaborated on the development and study of the phi-3 large language models.

Diverse Question Types in LLM Benchmark Prompts
10 Jul 2025
Explore a sample prompt featuring varied multiple-choice questions covering math, science, and the humanities.

References on Responsible AI, Long-Context, and Data-Optimal LLMs
9 Jul 2025
A compilation of cited works supporting our phi-3 research, including studies on responsible AI alignment, long-context models, and data-optimal training strategies.

Confronting Multimodal LLM Challenges: Reasoning Gaps and Safety Trade-offs in Phi-3-Vision
9 Jul 2025
Explore the inherent challenges in Phi-3-Vision, from limitations in high-level reasoning and occasional ungrounded outputs to the complex trade-offs involved in safety alignment.

Benchmarking Multimodal Safety: Phi-3-Vision's Robust RAI Performance
9 Jul 2025
Explore Phi-3-Vision's robust safety evaluation on internal and public multimodal RAI benchmarks (RTVLM, VLGuard).

Phi-3-Vision's Triumphant Performance on Key Multimodal Benchmarks
8 Jul 2025
Witness Phi-3-Vision's impressive evaluation results across nine open-source academic benchmarks, challenging top multimodal LLMs like MM1, LLaVA, and Claude 3.

Unveiling phi-3-vision: Architecture, Pre-training, and Post-training for Visual AI
8 Jul 2025
Explore the technical specifications of phi-3-vision, detailing its CLIP + phi-3-mini-128K architecture and its diverse multimodal pre-training dataset.

Navigating LLM Frontiers: phi-3's Weaknesses and Augmentation Pathways
8 Jul 2025
Explore the inherent challenges in even high-performing small LLMs like phi-3-mini, such as factual limitations and language restrictions.