Hacker News

It says they used Vicuna.


The paper says LLaMA:

> We develop a large multimodal model (LMM), by connecting the open-set visual encoder of CLIP [36] with the language decoder LLaMA, and fine-tuning them end-to-end on our generated instructional vision-language data
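The connection the quote describes (visual encoder features fed into a language decoder) can be sketched as a learned projection from CLIP's feature space into the LLM's token-embedding space. This is a minimal illustrative sketch, not the paper's implementation: the dimensions (1024-d CLIP ViT-L/14 patch features, 4096-d LLaMA-7B embeddings, 256 patches, 16 text tokens) and the random stand-in features are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

clip_dim, llm_dim, n_patches = 1024, 4096, 256

# Stand-in for the CLIP visual encoder's output: one feature per image patch.
visual_features = rng.standard_normal((n_patches, clip_dim))

# Trainable projection connecting the two models (fine-tuned end-to-end,
# per the quoted passage). Initialized small here purely for illustration.
W = rng.standard_normal((clip_dim, llm_dim)) * 0.01

# Map patch features into "visual tokens" in the LLM's embedding space,
# then prepend them to the text-token embeddings as the decoder's input.
visual_tokens = visual_features @ W                      # (256, 4096)
text_tokens = rng.standard_normal((16, llm_dim))         # hypothetical prompt
llm_input = np.concatenate([visual_tokens, text_tokens]) # (272, 4096)

print(llm_input.shape)
```

The key point of the design is that only the projection (and optionally the decoder) needs to learn to bridge the two pretrained models; the visual tokens are consumed by the decoder exactly like ordinary word embeddings.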



