This project set out to tackle a powerful question: What if interviewers couldn’t see or hear any personal characteristics of a candidate — only their words?
The team designed a blind interview plug-in that replaces candidates’ faces and voices with a single, neutral model during video calls. By removing cues such as gender, ethnicity, and accent, the system aimed to reduce unconscious bias and create a fairer, more inclusive interview process.
The project's goals were to:

- Develop a video call plug-in that neutralises personal characteristics in real time.
- Explore how technology can reduce unconscious bias in recruitment and assessment.
- Test the system in collaboration with the MPLS EDI Committee.
The team built the system using:
- Open Broadcaster Software (OBS) for video streaming.
- Face- and voice-changing plug-ins, integrated with custom-trained models.
- Generative Adversarial Networks (GANs) such as StyleGAN, steered via latent-space editing.
- Gender and race classifiers to check that outputs were balanced.
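The core idea behind latent-space editing can be sketched in a few lines: if a linear classifier finds a direction in the generator's latent space that correlates with an attribute (say, perceived gender), removing the latent code's component along that direction suppresses the attribute in the generated image. The sketch below is illustrative only; the function name, the 512-dimensional latent size, and the attribute direction are assumptions, not the team's actual implementation.

```python
import numpy as np

def neutralise_attribute(w: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project a latent code onto the hyperplane orthogonal to an
    attribute direction (e.g. a 'gender' axis found with a linear
    classifier), removing that attribute's contribution."""
    d = direction / np.linalg.norm(direction)
    return w - np.dot(w, d) * d

# Toy example: a 512-dim latent code and a hypothetical attribute axis.
rng = np.random.default_rng(0)
w = rng.standard_normal(512)
d = rng.standard_normal(512)

w_neutral = neutralise_attribute(w, d)
# The edited code carries no component along the attribute direction,
# so the dot product with the unit axis is (numerically) zero.
print(abs(np.dot(w_neutral, d / np.linalg.norm(d))))
```

In a real pipeline the edited code would then be passed back through the StyleGAN generator; chaining projections for several attribute axes (gender, ethnicity markers, and so on) neutralises multiple cues at once.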
After curating a diverse dataset, the team trained and tested the models, paying particular attention to latency, processing power, and compatibility. A pilot study then put the prototype into practice, providing valuable real-world feedback.
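Latency matters here because the anonymisation runs live inside a video call: each frame must be processed faster than the frame interval (roughly 33 ms at 30 fps) or the stream stutters. A minimal harness for that kind of check might look like the following; the function names, the dummy frames, and the 33 ms budget are illustrative assumptions, not the team's actual test setup.

```python
import time

def measure_latency(process_frame, frames, budget_ms=33.0):
    """Time a per-frame processing function and report whether it fits
    a real-time budget (33 ms is roughly one frame at 30 fps).
    `process_frame` stands in for the face/voice anonymisation step."""
    timings_ms = []
    for frame in frames:
        t0 = time.perf_counter()
        process_frame(frame)
        timings_ms.append((time.perf_counter() - t0) * 1000.0)
    avg_ms = sum(timings_ms) / len(timings_ms)
    return avg_ms, avg_ms <= budget_ms

# Toy stand-in: a trivial transform over dummy "frames".
frames = [list(range(1000)) for _ in range(50)]
avg_ms, realtime_ok = measure_latency(lambda f: [x * 2 for x in f], frames)
print(f"avg {avg_ms:.3f} ms/frame, real-time: {realtime_ok}")
```

The same harness generalises to audio: voice conversion is usually processed in short buffers, and the per-buffer budget is the buffer duration rather than the frame interval.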
- Delivered a working prototype of a blind interview plug-in.
- Demonstrated how AI and software can be applied to promote fairness in recruitment.
- Sparked meaningful discussion on the role of engineers in designing systems for social responsibility.