The Johns Hopkins Center for Bioengineering Innovation & Design (CBID) in the Department of Biomedical Engineering is seeking an Assistant Research Engineer to lead computer vision and AI development for the VectorCam platform. VectorCam is an AI-enabled mobile imaging system designed to allow community health workers to identify mosquito species in real time, enabling faster vector surveillance and improved malaria control strategies. This role will serve as the technical lead for computer vision and image analysis within the project, responsible for designing and iterating on machine learning architectures, managing training pipelines and datasets, and optimizing models for deployment across edge and cloud environments. The successful candidate will work at the intersection of computer vision, edge AI deployment, mobile imaging systems, and global health field implementation. The role requires someone who is highly experimental and curious, constantly exploring new model architectures and approaches while pushing the performance and reliability of the AI system. The ideal candidate will also demonstrate strong attention to detail in data management and data science practices, and be able to clearly articulate the probability, statistics, and evaluation methods used when defending model design choices and performance claims.
Department: Johns Hopkins Center for Bioengineering Innovation & Design (CBID), Department of Biomedical Engineering, Whiting School of Engineering
Location & Duration: Baltimore, MD, USA (in person); 40 hours per week
Reports to: Dr. Soumyadipta Acharya (Principal Investigator)
Key Responsibilities
- Lead the design, training, and evaluation of computer vision models for mosquito identification and other relevant vector-borne disease projects.
- Develop and maintain a scalable training and evaluation pipeline for image classification and detection models.
- Continuously explore and evaluate new architectures, training approaches, and optimization strategies to improve model accuracy and robustness.
- Design and maintain systems for dataset management, ensuring training, validation, and test datasets remain clean, versioned, and traceable.
- Maintain high standards of data organization and reproducibility across experiments and training pipelines.
- Develop strategies for deploying models across mobile edge devices and cloud infrastructure.
- Optimize models for inference on smartphones and other resource-constrained platforms.
- Work closely with software engineers to integrate models into the Android application and imaging pipeline.
- Investigate and troubleshoot performance issues related to camera systems, imaging conditions, and device variability.
- Develop benchmarking and evaluation methods to continuously monitor model performance across deployments.
- Apply statistical reasoning when evaluating model performance, and clearly communicate the statistical basis for model improvements and algorithmic decisions.
- Collaborate with entomologists and field teams to improve data collection, labeling, and training dataset quality.
- Contribute to publications and presentations describing algorithm development and system performance.
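The emphasis above on communicating the statistical basis for performance claims might, for instance, mean reporting confidence intervals rather than bare accuracies. A minimal illustrative sketch (not part of the posting) using the Wilson score interval:

```python
import math

def wilson_interval(correct, total, z=1.96):
    """Wilson score interval for a classifier's accuracy (z = 1.96 ~ 95%)."""
    p = correct / total
    denom = 1 + z * z / total
    center = (p + z * z / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total)) / denom
    return center - half, center + half

# e.g. wilson_interval(90, 100) gives an interval near (0.83, 0.94),
# a more defensible claim than the point estimate "90% accurate"
```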
Technical Focus Areas
Computer Vision and Model Development: Design and train deep learning models for insect classification and morphological recognition. Experiment with architectures such as EfficientNet, YOLO, Vision Transformers, and other modern computer vision models to determine optimal approaches for the application. Develop strategies for handling limited datasets, noisy data, and challenging real-world image conditions.
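Field-collected insect datasets are typically class-imbalanced, so evaluation for this kind of work often reports macro-averaged F1 rather than raw accuracy. A hedged sketch of the metric (the species names below are placeholders, not the project's label set):

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: compute F1 per class, then average unweighted,
    so rare species count as much as common ones."""
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)
```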
Model Optimization for Edge Deployment: Optimize models for deployment on smartphones using frameworks such as TensorFlow Lite, PyTorch Mobile, or ONNX. Investigate quantization, pruning, and other model optimization techniques to ensure efficient inference on resource-constrained devices. Ensure models perform consistently across different smartphone cameras and hardware configurations.
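Post-training quantization of the kind mentioned above maps float weights to 8-bit integers via a scale factor; frameworks such as TensorFlow Lite automate this, but the core arithmetic can be sketched in plain Python (an illustration of the idea, not any framework's implementation):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: q = round(w / scale)."""
    # Map the largest-magnitude weight to +/-127; guard against all-zero tensors.
    scale = (max(abs(w) for w in weights) or 1.0) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights; the gap is the quantization error."""
    return [v * scale for v in q]
```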
AI Data Pipeline and Dataset Management: Develop systems for dataset versioning, experiment tracking, and model reproducibility. Ensure that training, validation, and testing datasets are well organized, auditable, and traceable. Maintain clear documentation of dataset lineage and experiment configurations. Build workflows that support continuous model retraining as new field data becomes available.
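Dataset traceability of the sort described here is often implemented by hashing every file into a versioned manifest, so any trained model can be tied back to the exact bytes it saw. A minimal sketch (the directory layout and field names are assumptions, not the project's actual schema):

```python
import hashlib
import pathlib

def build_manifest(data_dir, split):
    """Record a SHA-256 digest for every file under data_dir."""
    root = pathlib.Path(data_dir)
    entries = [
        {"file": str(p.relative_to(root)),
         "sha256": hashlib.sha256(p.read_bytes()).hexdigest()}
        for p in sorted(root.rglob("*")) if p.is_file()
    ]
    return {"split": split, "num_files": len(entries), "files": entries}
```

A manifest like this would typically be serialized to JSON next to each trained checkpoint, so an audit can confirm which images fed which experiment.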
System Architecture for AI Deployment: Design the architecture for managing model updates, versioning, and deployment across edge devices and cloud platforms. Develop strategies for monitoring model performance and maintaining reliability across large-scale field deployments.
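One common pattern for the update-management problem described above is a small model registry, from which each deployed device pulls the newest model it is compatible with. A sketch under assumed field names (none of these come from the posting):

```python
def pick_model(registry, device_api_level):
    """Return the newest registered model the device can run, or None."""
    compatible = [m for m in registry if m["min_api"] <= device_api_level]
    return max(compatible, key=lambda m: m["version"], default=None)
```

Keeping the compatibility check server-side (or in a thin client helper like this) lets old field phones keep working while newer hardware receives larger models.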
Project Impact
VectorCam aims to transform how mosquito surveillance is conducted in malaria-endemic regions by enabling rapid and accurate species identification directly in the field. By improving the speed and accessibility of entomological surveillance, this technology has the potential to strengthen malaria control programs and support more targeted vector control interventions. This role offers the opportunity to work on a globally impactful technology while solving challenging problems at the intersection of computer vision, edge AI, and public health innovation.
Qualifications
- Master's degree in Computer Science, Machine Learning, Computer Vision, Software Engineering, or a related field.
- Strong background in computer vision and deep learning.
- Experience training and evaluating computer vision models using frameworks such as PyTorch or TensorFlow.
- Strong understanding of probability, statistics, and model evaluation methods, with the ability to clearly explain the reasoning behind model choices and performance metrics.
- Experience working with image datasets, data pipelines, and model evaluation methodologies.
- Experience deploying machine learning models to edge devices or mobile platforms.
- Strong programming skills in Python and experience with machine learning development environments.
- Strong attention to detail in data management, experiment tracking, and dataset organization.
- Ability to independently explore technical approaches and rapidly prototype solutions.
- Interest in applying AI systems to real-world global health challenges.
Preferred Experience
- Experience with model deployment on Android devices or mobile platforms.
- Experience with experiment tracking tools such as Weights & Biases, and with libraries and toolkits such as OpenCV, Hugging Face, or Google's ML Kit.
- Experience working with image datasets collected in real-world environments.
- Experience with edge AI optimization techniques such as quantization or pruning.
- Experience contributing to applied machine learning research or technical publications.
To apply, please visit facultyjobs.jhu.edu.
Don't forget to mention that you found the position on jobRxiv!
