VLM Engineer
Date: 23 Mar 2026
Location: AE
Company: Technology Innovation Institute
Key Responsibilities:
- Vision Model Ablation Studies: Conduct comprehensive ablation studies on vision models to assess the impact of various components and configurations. Collaborate with researchers to analyze and report on the effectiveness of different model architectures and settings.
- Data Ablation Research: Partner with team members to perform data ablation studies, identifying optimal data types and structures for training vision-language models. Analyze the impact of different data inputs on model performance, with a particular focus on vision-language alignment.
- Model Evaluation: Develop and implement robust evaluation protocols for vision-language models. Assess model performance across diverse benchmarks and real-world scenarios.
- Model Training and Optimization: Engage in model training, with an emphasis on integrating LLMs with vision models such as CLIP.
Technical Skills Required:
- Expertise in machine learning, particularly in vision-language models and LLMs.
- Strong understanding of model architectures like CLIP and their application in vision-language tasks.
- Proficiency in distributed training techniques and multi-GPU optimization.
- Experience with deep learning frameworks (e.g., PyTorch).
- Strong analytical skills for conducting ablation studies and evaluating model performance.
- Familiarity with dataset curation and processing for vision and language tasks.
Qualifications:
- PhD in deep learning.
- Proven track record of research and development in vision-language models.
- Publication record in top-tier conferences is highly desirable.