Light Side vs. Dark Side ML

nvisia is an award-winning software development partner driving competitive edge for clients.

As part of the nvisionaries Science Fair, senior technical architect Joshua Armstrong applied machine learning to an age-old problem: predicting a person’s allegiance to the light side or the dark side of the Force. Using machine learning, this project determines whether the user belongs among the Jedi or the Sith based on an image, then transforms that image to reflect their light or dark status.

“I had a binary image classifier demo, and I wanted a snappier way to display the results. Just telling someone that they line up with the Dark Side or the Light Side wasn't really cool enough for me.” -Joshua Armstrong, Senior Technical Architect (Milwaukee Region)

Joshua started with a notebook he had previously created to help people learn how image classifiers work, then built it out into a full interactive exhibit. At his booth, attendees could have their picture taken, see what percentage of Light Side vs. Dark Side the classifier assigned to the image, and watch their photo transformed into a classic Star Wars character of the same classification, all in real time.


Many technologies came together to enable this classification and transformation. The backend ran entirely on Amazon ECS and included rembg scripts to remove the background from each captured image, a TensorFlow classifier kernel to assign a “Sith-ness” value to each image, OpenCV with dlib to swap the user’s face onto a character portrait, and a Flask-based Python app to coordinate the flow of data between components. Joshua also learned Svelte to create the user-facing application, which ran in Safari.
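The flow the Flask app coordinates can be sketched as a simple pipeline. This is a minimal, hypothetical sketch: `run_pipeline`, `remove_bg`, `classify`, and `face_swap` are stand-in names, not functions from the project.

```python
# Hypothetical sketch of the data flow between components.
# remove_bg, classify, and face_swap stand in for the real
# rembg, TensorFlow, and OpenCV/dlib stages.

def run_pipeline(image_bytes, remove_bg, classify, face_swap):
    """Pass a captured image through each stage and collect the results."""
    cleaned = remove_bg(image_bytes)          # strip the booth background
    sithness = classify(cleaned)              # 0.0 = pure Jedi, 1.0 = pure Sith
    side = "Dark Side" if sithness >= 0.5 else "Light Side"
    portrait = face_swap(cleaned, sithness)   # merge face into a matching portrait
    return {"sithness": sithness, "side": side, "portrait": portrait}

# Example run with trivial stand-in stages:
result = run_pipeline(
    b"raw-image-bytes",
    remove_bg=lambda img: img,
    classify=lambda img: 0.8,
    face_swap=lambda img, score: b"portrait-bytes",
)
print(result["side"])  # Dark Side
```

In the real exhibit, each of these stages ran as its own component on ECS, with the Flask app moving the image data between them.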

The heart of the project is a TensorFlow-based convolutional binary image classifier. It consists of 16 Keras layers, not counting data augmentation and preprocessing. Five convolutional layers and two dense layers do the real work of the model. It was trained for 12 epochs on an augmented dataset of around 5,000 images of Sith and Jedi characters. To avoid biasing the classification based on the background of the user’s photo, the background is removed before classification.
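A binary classifier of this shape can be sketched in Keras as follows. This is an illustrative model with five convolutional blocks and two dense layers; the filter counts, input size, and other hyperparameters are assumptions, not Joshua's actual architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative convolutional binary classifier: five Conv2D layers and
# two Dense layers, ending in a sigmoid "Sith-ness" score in [0, 1].
# Layer sizes here are guesses for demonstration only.
model = keras.Sequential([
    keras.Input(shape=(180, 180, 3)),
    layers.Rescaling(1.0 / 255),                           # preprocessing
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(256, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Training would then call `model.fit` on the augmented dataset for the chosen number of epochs, with data augmentation (random flips, rotations, zooms) applied ahead of the layers shown here.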

Joshua had the most fun creating the face-swap component, which gave him a chance to explore OpenCV and dlib. The component selects a portrait of a Star Wars character matching the user’s “Sith-ness” score, then uses dlib’s facial landmark detection to swap the user’s face onto the portrait. A Delaunay triangulation, the geometric dual of the Voronoi diagram, is computed over the landmarks, dividing both faces into corresponding triangular regions that can be warped onto one another, effectively mapping a three-dimensional facial surface from the two-dimensional images.
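Triangulating facial landmarks is the step that makes region-by-region warping possible. A minimal illustration using SciPy's Delaunay triangulation, with made-up landmark coordinates standing in for the 68 landmarks a real pipeline would get from dlib's shape predictor:

```python
import numpy as np
from scipy.spatial import Delaunay

# Made-up 2-D "landmark" points; a real face swap would use the 68
# landmarks detected by dlib's shape predictor on each face.
landmarks = np.array([
    [0, 0], [4, 0], [4, 4], [0, 4],   # face outline corners
    [2, 2], [1, 3], [3, 3],           # interior features (eyes, nose)
])

tri = Delaunay(landmarks)

# Each row of tri.simplices holds three landmark indices forming one
# triangle. Warping each triangle on the user's face to the matching
# triangle on the character portrait produces the face swap.
for triangle in tri.simplices:
    print(landmarks[triangle].tolist())
```

Because the same landmark indices exist on both faces, the triangulation computed on one face transfers directly to the other, giving matched pairs of triangles to warp.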


All of this came together to allow the users to see their Light Side / Dark Side score and view themselves transformed into a Jedi or Sith character via the user-facing application. For his part, Joshua got to learn several technologies and further explore his interests in machine learning, not to mention getting to see the impact of his experiment on the attendees who stopped by his booth.

 “My favorite thing was seeing people’s faces when they saw their own merged into a Star Wars character.” -Joshua Armstrong, Senior Technical Architect (Milwaukee Region)

Meet the nvisionary

Name: Joshua Armstrong
Title: Senior Technical Architect
Track: Technical
Specialty: Full stack development, machine learning, data science, UNIX



Are you a creative technologist with a passion for experimenting with emerging technology? Learn more about joining our team and becoming an nvisionary!
