Junlin (Hans) Han
FAIR, Meta
Torr Vision Group, University of Oxford

Location: Meta Kings Cross, London N1C 4DB
Email: junlinhcv@gmail.com; junlinhan@meta.com; junlin.han@eng.ox.ac.uk
[GitHub] [Curriculum Vitae] [Google Scholar] [Twitter/X]

Profile

Greetings, welcome to my website!

News


Research Interests

My research focuses on multimodal foundation models, including multimodal large language models and generative models, as well as their interactions and integration. You can click below to view some representative papers on each topic.

I approach them from a vision-first perspective, aiming to make vision a central component of general intelligence.

(1) Multimodal foundation models
Selected projects: Visual priors in LLM pre-training, Llama 4

(2) Generative (usually 3D/video) models
Selected projects: Scaling up 3D gen, High-quality 3D asset gen, Scaling up neural rendering

(3) Integration and interactions
Selected projects: Unified pre-training, LLM for 3D world modeling

Education

Postgraduate (01.2024 – ongoing)
Undergraduate (02.2019 – 07.2023)

Research Experience

PhD Student Researcher at Meta (10.2023 – ongoing)
Lead Research Scientist, Cybever (05.2023 – 09.2023)
Visiting Researcher at AIML, University of Adelaide (12.2021 – 05.2023)
Research Student at Data61-CSIRO & Australian National University (08.2020 – 05.2023)

Publications

Recent representative papers are highlighted.

Academic Services

Awards

Miscs