Simple AI Face and Emotion Recognition With React

Muftaudeen Jimoh · Published in Dev Genius · Apr 6, 2022


Do you like AI? I sure do. Today you are going to use artificial intelligence and face recognition to detect your emotions through your webcam in real time.

To accomplish this face recognition, you’ll use a library called face-api.js, a wrapper around TensorFlow.js, one of the most popular machine-learning libraries available (as of the time of publishing this article). It’s really easy to get it set up and running in the browser.

The complete code for this article is available in the accompanying GitHub repository.

Please leave a star ✨ while you are at the repository.

Let’s Begin

Bootstrap a new React app, say ai-face-detection:

npx create-react-app ai-face-detection

Install the face-api.js library in your React app by running:

# using npm
npm i face-api.js

# using yarn
yarn add face-api.js

All the models you’ll need for this project are available in my repository.

Place the models folder from my repository under the public folder in your React app. Your app structure should look like this:

AI-FACE-DETECTION
|- public
|  |- models
|- src
|  |- App.css
|  |- App.js
|  |- index.css
|  |- index.js

Run npm start to start your application.

Let’s add some styles to our index.css:

body {
  margin: 0;
  padding: 0;
  width: 100vw;
  height: 100vh;
  display: flex;
  justify-content: center;
  align-items: center;
}

Our App.js file is going to be our main focus for the rest of this article. In App.js, we are going to add a video element (the title comes a little later).

App.js

import { useEffect, useRef } from 'react';

const App = () => {
  const videoRef = useRef();

  return (
    <div>
      <video crossOrigin='anonymous' ref={videoRef} autoPlay>
      </video>
    </div>
  );
};

export default App;

I added a useRef hook to keep a reference to the video element. We need to access the webcam and output its video, so we’ll create a function startVideo that requests the webcam and stores the resulting stream in videoRef:

App.js

useEffect(() => {
  startVideo();
}, []);

const startVideo = () => {
  navigator.mediaDevices.getUserMedia({ video: true })
    .then((currentStream) => {
      videoRef.current.srcObject = currentStream;
    })
    .catch((err) => {
      console.error(err);
    });
};

The useEffect hook runs once the app is mounted and initializes the video stream by calling the startVideo function. If you save now, you should see your webcam output in your browser.
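As an optional extra (not part of the original code), you can also stop the webcam tracks when the component unmounts, so the camera light turns off. Here is a minimal sketch, assuming the stream is attached to the video element’s srcObject as above:

useEffect(() => {
  startVideo();

  // optional cleanup sketch: stop every webcam track when the component unmounts
  const videoEl = videoRef.current;
  return () => {
    const stream = videoEl && videoEl.srcObject;
    if (stream) {
      stream.getTracks().forEach((track) => track.stop());
    }
  };
}, []);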

Now let’s begin using our face-api.js library.

Import the face-api.js library into your App.js and load the models from your models folder:

App.js

// at the top of App.js
import * as faceapi from 'face-api.js';

useEffect(() => {
  startVideo();
  videoRef && loadModels();
}, []);

const loadModels = () => {
  Promise.all([
    faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
    faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
    faceapi.nets.faceRecognitionNet.loadFromUri('/models'),
    faceapi.nets.faceExpressionNet.loadFromUri('/models'),
  ]).then(() => {
    faceDetection();
  });
};

const faceDetection = async () => {
  const detections = await faceapi
    .detectAllFaces(videoRef.current, new faceapi.TinyFaceDetectorOptions())
    .withFaceLandmarks()
    .withFaceExpressions();
  console.log(detections);
};

Okay, let me explain:

  • In the snippet added to the useEffect hook,

videoRef && loadModels();

we call the loadModels function only once the video ref is available.

  • The loadModels function loads all the required models from our models folder using Promise.all (which runs multiple promises and resolves to a single promise with an array of their results).
  • After loading the models, it calls the faceDetection function, which detects any face placed in the webcam view and prints the value of its detections to the console.

Now, if you put your face in the webcam view, you should see a log in the browser console containing information about the detected face.

Note: for proper detection, you should be in a well-lit environment.
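If you expand one of those logged objects, each detection carries an expressions object mapping every emotion to a probability. As a quick sketch (assuming that shape, and not part of the final app), you could log the most likely emotion instead of the whole object:

const faceDetection = async () => {
  const detections = await faceapi
    .detectAllFaces(videoRef.current, new faceapi.TinyFaceDetectorOptions())
    .withFaceLandmarks()
    .withFaceExpressions();

  // sketch: pick the expression with the highest probability for the first detected face
  if (detections.length > 0) {
    const expressions = detections[0].expressions; // e.g. { happy: 0.98, neutral: 0.01, ... }
    const [topExpression] = Object.entries(expressions).sort((a, b) => b[1] - a[1])[0];
    console.log('Most likely expression:', topExpression);
  }
};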

To show the detections on top of the video output, we’ll add a canvas element to App.js to trace the position of the detected face, and create a canvasRef to keep track of the canvas.

App.js

const canvasRef = useRef();

return (
  <div className="app">
    <h1>AI FACE DETECTION</h1>
    <div className='app__video'>
      <video crossOrigin='anonymous' ref={videoRef} autoPlay>
      </video>
    </div>
    <canvas ref={canvasRef} width="940" height="650"
      className='app__canvas' />
  </div>
);

To style and position these elements, we’ll edit our App.css:

App.css

.app {
  display: flex;
  width: 100vw;
  height: 100vh;
  flex-direction: column;
  align-items: center;
  justify-content: space-between;
}

.app__video {
  display: flex;
  align-items: center;
}

.app__canvas {
  position: absolute;
  top: 100px;
}

Now, to draw the detection data onto the canvas, we’ll change our faceDetection function to:

App.js

const faceDetection = async () => {
  setInterval(async () => {
    const detections = await faceapi
      .detectAllFaces(videoRef.current, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions();

    canvasRef.current.innerHTML = faceapi.createCanvasFromMedia(videoRef.current);
    faceapi.matchDimensions(canvasRef.current, {
      width: 940,
      height: 650,
    });

    const resized = faceapi.resizeResults(detections, {
      width: 940,
      height: 650,
    });

    // draw the detection box around the detected face
    faceapi.draw.drawDetections(canvasRef.current, resized);
    // draw the landmark points onto the detected face
    faceapi.draw.drawFaceLandmarks(canvasRef.current, resized);
    // analyze and output the current expression of the detected face
    faceapi.draw.drawFaceExpressions(canvasRef.current, resized);
  }, 1000);
};

And with that, we have completed the face and expression recognition app. You can adjust the position of the canvas in the App.css file.
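One small refinement you may want (a sketch, not part of the original code): since setInterval keeps drawing onto the same canvas every second, you can clear it right before the draw calls so stale boxes from previous frames don’t accumulate:

// inside the setInterval callback, before the faceapi.draw.* calls
const ctx = canvasRef.current.getContext('2d');
ctx.clearRect(0, 0, canvasRef.current.width, canvasRef.current.height);

faceapi.draw.drawDetections(canvasRef.current, resized);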

So our complete App.js file should look like:

import { useRef, useEffect } from 'react';
import './App.css';
import * as faceapi from 'face-api.js';

function App() {
  const videoRef = useRef();
  const canvasRef = useRef();

  useEffect(() => {
    startVideo();
    videoRef && loadModels();
  }, []);

  const loadModels = () => {
    Promise.all([
      faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
      faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
      faceapi.nets.faceRecognitionNet.loadFromUri('/models'),
      faceapi.nets.faceExpressionNet.loadFromUri('/models'),
    ]).then(() => {
      faceDetection();
    });
  };

  const startVideo = () => {
    navigator.mediaDevices.getUserMedia({ video: true })
      .then((currentStream) => {
        videoRef.current.srcObject = currentStream;
      })
      .catch((err) => {
        console.error(err);
      });
  };

  const faceDetection = async () => {
    setInterval(async () => {
      const detections = await faceapi
        .detectAllFaces(videoRef.current, new faceapi.TinyFaceDetectorOptions())
        .withFaceLandmarks()
        .withFaceExpressions();

      canvasRef.current.innerHTML = faceapi.createCanvasFromMedia(videoRef.current);
      faceapi.matchDimensions(canvasRef.current, {
        width: 940,
        height: 650,
      });

      const resized = faceapi.resizeResults(detections, {
        width: 940,
        height: 650,
      });

      // draw the detection box around the detected face
      faceapi.draw.drawDetections(canvasRef.current, resized);
      // draw the landmark points onto the detected face
      faceapi.draw.drawFaceLandmarks(canvasRef.current, resized);
      // analyze and output the current expression of the detected face
      faceapi.draw.drawFaceExpressions(canvasRef.current, resized);
    }, 1000);
  };

  return (
    <div className="app">
      <h1>AI FACE DETECTION</h1>
      <div className='app__video'>
        <video crossOrigin='anonymous' ref={videoRef} autoPlay>
        </video>
      </div>
      <canvas ref={canvasRef} width="940" height="650"
        className='app__canvas' />
    </div>
  );
}

export default App;

Conclusion

In this article, we saw how easy it is to set up a face and expression recognition application using the face-api.js library. You can implement further features by reading its documentation.

Thanks for reading, and don’t forget to clap 👏 and leave a star ✨ on my repository.
