Emojify – Create your own emoji with Deep Learning

Deep Learning project for beginners – Taking you closer to your Data Science dream

Emojis and avatars are ways to convey nonverbal cues. These cues have become an essential part of online chatting, product reviews, brand communication, and much more. They have also driven a growing body of data science research dedicated to emoji-driven storytelling.

With advancements in computer vision and deep learning, it is now possible to detect human emotions from images. In this deep learning project, we will classify human facial expressions to filter and map corresponding emojis or avatars.

About the Dataset

The FER2013 (Facial Expression Recognition 2013) dataset consists of 48×48 pixel grayscale face images. The faces are centered and occupy a similar amount of space in every image. The dataset covers the following emotion categories:

  • 0: angry
  • 1: disgust
  • 2: fear
  • 3: happy
  • 4: sad
  • 5: surprise
  • 6: neutral

Download Dataset: Facial Expression Recognition Dataset

Download Project Code

Before proceeding, please download the source code: Emoji Creator Project Source Code

Create your emoji with Deep Learning

We will build a deep learning model to classify facial expressions from the images. Then we will map the classified emotion to an emoji or an avatar.

Facial Emotion Recognition using CNN

In the steps below, we will build a convolutional neural network architecture and train the model on the FER2013 dataset to recognize emotions in images.

Download the dataset from the above link and extract it into a data folder with separate train and test directories, as sketched below.
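The generators in step 2 expect one sub-directory per emotion class inside each of the train and test folders. A sketch of the assumed layout (the folder names here are an assumption inferred from the emotion labels used later in the code; match them to whatever your extracted archive actually contains):

data/
  train/
    angry/
    disgusted/
    fearful/
    happy/
    neutral/
    sad/
    surprised/
  test/
    (the same seven sub-directories)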

Make a file train.py and follow the steps:

1. Imports:

import numpy as np
import cv2

from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D
from keras.optimizers import Adam
from keras.layers import MaxPooling2D
from keras.preprocessing.image import ImageDataGenerator
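If you installed Keras through TensorFlow 2 rather than as a standalone package, the same classes are available under the tensorflow.keras namespace. A minimal alternative import block, assuming a TensorFlow 2 installation:

import numpy as np
import cv2

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator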

2. Initialize the training and validation generators:

train_dir = 'data/train'
val_dir = 'data/test'
train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        train_dir,
        target_size=(48,48),
        batch_size=64,
        color_mode="gray_framescale",
        class_mode='categorical')

validation_generator = val_datagen.flow_from_directory(
        val_dir,
        target_size=(48,48),
        batch_size=64,
        color_mode="gray_framescale",
        class_mode='categorical')
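Note that flow_from_directory assigns label indices alphabetically by folder name, which is why the emotion_dict used later (Angry, Disgusted, Fearful, Happy, Neutral, Sad, Surprised) differs from the 0–6 ordering in the dataset description above. A quick sanity check, assuming the folder names sketched earlier:

# Print the label index assigned to each class folder; the emotion_dict
# used at prediction time must follow this ordering.
print(train_generator.class_indices)
# e.g. {'angry': 0, 'disgusted': 1, 'fearful': 2, 'happy': 3, 'neutral': 4, 'sad': 5, 'surprised': 6}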

3. Build the convolutional network architecture:

emotion_model = Sequential()

emotion_model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(48,48,1)))
emotion_model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
emotion_model.add(Dropout(0.25))

emotion_model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
emotion_model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
emotion_model.add(Dropout(0.25))

emotion_model.add(Flatten())
emotion_model.add(Dense(1024, activation='relu'))
emotion_model.add(Dropout(0.5))
emotion_model.add(Dense(7, activation='softmax'))
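Before compiling, it can be useful to confirm the layer stack and parameter count:

# Print a per-layer summary of output shapes and parameter counts.
emotion_model.summary()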

4. Compile and train the model:

emotion_model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.0001, decay=1e-6), metrics=['accuracy'])

emotion_model_info = emotion_model.fit_generator(
        train_generator,
        steps_per_epoch=28709 // 64,
        epochs=50,
        validation_data=validation_generator,
        validation_steps=7178 // 64)
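The hard-coded 28709 and 7178 are the image counts of the FER2013 train and test splits. If your extracted dataset differs, the step counts can be derived from the generators instead; a small sketch:

# Derive step counts from the generators rather than hard-coding them.
steps_per_epoch = train_generator.samples // train_generator.batch_size
validation_steps = validation_generator.samples // validation_generator.batch_size

(On TensorFlow 2, model.fit accepts these generators directly and fit_generator is deprecated.)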

5. Save the model weights:

emotion_model.save_weights('model.h5')
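save_weights stores only the parameters, which is why gui.py below has to rebuild the identical architecture before calling load_weights. If you prefer a single self-contained file, Keras can also save the architecture and weights together; a sketch (the filename emotion_model_full.h5 is arbitrary):

# Alternative: save the full model (architecture + weights) in one file.
emotion_model.save('emotion_model_full.h5')
# Restore it later without redefining the layers:
# from keras.models import load_model
# emotion_model = load_model('emotion_model_full.h5')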

6. Using the OpenCV haarcascade XML file, detect the bounding boxes of the faces in the webcam feed and predict the emotions:

cv2.ocl.setUseOpenCL(False)

emotion_dict = {0: "Angry", 1: "Disgusted", 2: "Fearful", 3: "Happy", 4: "Neutral", 5: "Sad", 6: "Surprised"}

# Load the face detector once, outside the loop; cv2.data.haarcascades
# points to the cascade files bundled with the opencv-python package.
bounding_box = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    num_faces = bounding_box.detectMultiScale(gray_frame, scaleFactor=1.3, minNeighbors=5)

    for (x, y, w, h) in num_faces:
        cv2.rectangle(frame, (x, y-50), (x+w, y+h+10), (255, 0, 0), 2)
        roi_gray_frame = gray_frame[y:y + h, x:x + w]
        cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray_frame, (48, 48)), -1), 0)
        emotion_prediction = emotion_model.predict(cropped_img)
        maxindex = int(np.argmax(emotion_prediction))
        cv2.putText(frame, emotion_dict[maxindex], (x+20, y-60), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)

    cv2.imshow('Video', cv2.resize(frame, (1200, 860), interpolation=cv2.INTER_CUBIC))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
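One subtlety: the generators rescaled pixel values to [0, 1] during training, but the webcam crop above goes into predict() as raw 0–255 values. Scaling the crop the same way keeps inference consistent with training; a minimal fix to place just before the predict call:

# Match the training-time preprocessing (ImageDataGenerator rescale=1./255).
cropped_img = cropped_img.astype('float32') / 255.0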

Code for GUI and mapping with emojis

Create a folder named emojis and save the emojis corresponding to each of the seven emotions in the dataset.
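The GUI looks the avatar images up by the paths in emoji_dist below, so the filenames must match exactly. A quick check, assuming the paths used in gui.py:

import os

# Verify that one avatar image exists for each of the seven emotion classes.
for path in ["./emojis/angry.png", "./emojis/disgusted.png", "./emojis/fearful.png",
             "./emojis/happy.png", "./emojis/neutral.png", "./emojis/sad.png",
             "./emojis/surprised.png"]:
    assert os.path.exists(path), f"missing emoji image: {path}"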

Paste the code below into gui.py and run the file.

import tkinter as tk
from tkinter import *
import cv2
from PIL import Image, ImageTk
import os
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D
from keras.optimizers import Adam
from keras.layers import MaxPooling2D
from keras.preprocessing.image import ImageDataGenerator

emotion_model = Sequential()

emotion_model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(48,48,1)))
emotion_model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
emotion_model.add(Dropout(0.25))

emotion_model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
emotion_model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
emotion_model.add(Dropout(0.25))

emotion_model.add(Flatten())
emotion_model.add(Dense(1024, activation='relu'))
emotion_model.add(Dropout(0.5))
emotion_model.add(Dense(7, activation='softmax'))
emotion_model.load_weights('model.h5')

cv2.ocl.setUseOpenCL(False)

emotion_dict = {0: "   Angry   ", 1: "Disgusted", 2: "  Fearful  ", 3: "   Happy   ", 4: "  Neutral  ", 5: "    Sad    ", 6: "Surprised"}


emoji_dist = {0: "./emojis/angry.png", 1: "./emojis/disgusted.png", 2: "./emojis/fearful.png", 3: "./emojis/happy.png", 4: "./emojis/neutral.png", 5: "./emojis/sad.png", 6: "./emojis/surprised.png"}

global last_frame1
last_frame1 = np.zeros((480, 640, 3), dtype=np.uint8)
global cap1
# Open the webcam once at start-up; re-opening it on every call of
# show_vid() makes the camera blink and can fail outright.
cap1 = cv2.VideoCapture(0)
show_text = [0]

def show_vid():
    if not cap1.isOpened():
        print("cannot open the camera")
    flag1, frame1 = cap1.read()
    frame1 = cv2.resize(frame1, (600, 500))

    bounding_box = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    gray_frame = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    num_faces = bounding_box.detectMultiScale(gray_frame,scaleFactor=1.3, minNeighbors=5)

    for (x, y, w, h) in num_faces:
        cv2.rectangle(frame1, (x, y-50), (x+w, y+h+10), (255, 0, 0), 2)
        roi_gray_frame = gray_frame[y:y + h, x:x + w]
        cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray_frame, (48, 48)), -1), 0)
        prediction = emotion_model.predict(cropped_img)
        
        maxindex = int(np.argmax(prediction))
        cv2.putText(frame1, emotion_dict[maxindex], (x+20, y-60), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
        show_text[0]=maxindex
    if flag1 is None:
        print ("Major error!")
    elif flag1:
        global last_frame1
        last_frame1 = frame1.copy()
        pic = cv2.cvtColor(last_frame1, cv2.COLOR_BGR2RGB)     
        img = Image.fromarray(pic)
        imgtk = ImageTk.PhotoImage(image=img)
        lmain.imgtk = imgtk
        lmain.configure(image=imgtk)
        lmain.after(10, show_vid)


def show_vid2():
    frame2 = cv2.imread(emoji_dist[show_text[0]])
    pic2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2RGB)
    img2 = Image.fromarray(pic2)
    imgtk2=ImageTk.PhotoImage(image=img2)
    lmain2.imgtk2=imgtk2
    lmain3.configure(text=emotion_dict[show_text[0]],font=('arial',45,'bold'))
    
    lmain2.configure(image=imgtk2)
    lmain2.after(10, show_vid2)

if __name__ == '__main__':
    root=tk.Tk()   
    img = ImageTk.PhotoImage(Image.open("logo.png"))
    heading = Label(root,image=img,bg='black')
    
    heading.pack() 
    heading2=Label(root,text="Photo to Emoji",pady=20, font=('arial',45,'bold'),bg='black',fg='#CDCDCD')                                 
    
    heading2.pack()
    lmain = tk.Label(master=root,padx=50,bd=10)
    lmain2 = tk.Label(master=root,bd=10)

    lmain3=tk.Label(master=root,bd=10,fg="#CDCDCD",bg='black')
    lmain.pack(side=LEFT)
    lmain.place(x=50,y=250)
    lmain3.pack()
    lmain3.place(x=960,y=250)
    lmain2.pack(side=RIGHT)
    lmain2.place(x=900,y=350)
    


    root.title("Photo To Emoji")            
    root.geometry("1400x900+100+10") 
    root['bg']='black'
    exitbutton = Button(root, text='Quit', fg="red", command=root.destroy, font=('arial', 25, 'bold'))
    exitbutton.pack(side=BOTTOM)
    show_vid()
    show_vid2()
    root.mainloop()
    cap1.release()

Summary

In this deep learning project for beginners, we built a convolutional neural network to recognize facial emotions and trained it on the FER2013 dataset. We then mapped the recognized emotions to corresponding emojis or avatars.

Using OpenCV's haar cascade XML file, we obtain the bounding boxes of the faces in the webcam feed, and then we feed these face regions to the trained model for classification.

85 Responses

  1. Fadi says:

    Hello sir
    there is no error or problems appear in the terminal after I run the gui code
    but there is not anything shown, the cam didn’t open,
    the out put in the terminal is ‘Process finished with exit code 132 (interrupted by signal 4: SIGILL)’

  2. hemalatha says:

    sir i got error :NameError Traceback (most recent call last)
    in ()
    1 train_dir = ‘data/train’
    2 val_dir = ‘data/test’
    —-> 3 train_datagen = ImageDataGenerator(rescale=1./255)
    4 val_datagen = ImageDataGenerator(rescale=1./255)
    5

    NameError: name ‘ImageDataGenerator’ is not defined
    and im using googlecolab

  3. hemalatha says:

    sir in import section it instruct me to from: tensorflow.keras.models import Sequential.while executing and im using googlecolab

  4. Saurab Kumar says:

    We got stucked at a point can u help us to get the output

  5. Rana Rahul Kumar says:

    Can you explain me gui.py code, I’m having problem with that, it won’t run properly

  6. Sameer Rathod says:

    I found this project and it is amazing
    I tried to implement this but still not able to execute this code I need help to execute this project.

    I am using Google Colab
    Please Help me to execute this project

  7. Mithil Mittal says:

    just remove
    if cv2.waitKey(1) & 0xFF == ord(‘q’):
    exit()
    it worked for me

  8. lulu says:

    Can you please share your code link like github or anything

  9. M. Mohasin Mudassar says:

    Sir kindly tell me where can i take this trained model.

    Thanks!

  10. Nandini says:

    Can you explain me gui.py code, I’m having problem with that, it won’t run properly. Webcam is not switching on only . Could you please tell what may be the problem

  11. Keerthy says:

    When I try to run this training code I got an error of showing “module not found: keras.emotion_model”.
    I already downloaded keras. What I want to do to solve this problem

  12. Frank says:

    Hey there, I have tried to run the code (both train.py and gui.py), but I am getting this feedback of “[Finished in 20.3s with exit code -4]”. Is there anyone getting the same issue here? Thank you.

  13. Harshad says:

    can u explain what are the benefit of this project ?

  14. PAYAS DOSHI says:

    Sir can you please tell me where to save all this .py files and data dir for train and test directories
    as when i am running train.py i am having a error
    keras.emotion_model not found
    although i have downloaded tenserflow and keras

  15. Navin Kumar Shahi says:

    how to resolve model.h5 error. It shows:-
    OSError: Unable to open file (unable to open file: name = ‘model.h5’, errno = 2, error message = ‘No such file or directory’, flags = 0, o_flags = 0)

  16. ans says:

    Same here
    How should I resolve this?

  17. Md. Asaduzzaman says:

    I want to run this code own my pc I need your help can you fixed all setup own my my pc. If you free please help me i need your help. Thank you

  18. Semyon Shamaev says:

    Finally code

    import tkinter as tk
    from tkinter import *
    import cv2
    from PIL import Image, ImageTk
    import os
    import numpy as np
    import cv2
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Flatten
    from keras.layers import Conv2D
    from tensorflow.keras.optimizers import Adam
    from keras.layers import MaxPooling2D
    from keras.preprocessing.image import ImageDataGenerator
    emotion_model = Sequential()
    emotion_model.add(Conv2D(32, kernel_size=(3, 3), activation=’relu’, input_shape=(48,48,1)))
    emotion_model.add(Conv2D(64, kernel_size=(3, 3), activation=’relu’))
    emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
    emotion_model.add(Dropout(0.25))
    emotion_model.add(Conv2D(128, kernel_size=(3, 3), activation=’relu’))
    emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
    emotion_model.add(Conv2D(128, kernel_size=(3, 3), activation=’relu’))
    emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
    emotion_model.add(Dropout(0.25))
    emotion_model.add(Flatten())
    emotion_model.add(Dense(1024, activation=’relu’))
    emotion_model.add(Dropout(0.5))
    emotion_model.add(Dense(7, activation=’softmax’))
    emotion_model.load_weights(‘model.h5’)
    cv2.ocl.setUseOpenCL(False)
    emotion_dict = {0: ” Angry “, 1: “Disgusted”, 2: ” Fearful “, 3: ” Happy “, 4: ” Neutral “, 5: ” Sad “, 6: “Surprised”}
    emoji_dist={0:”./emojis/angry.png”,2:”./emojis/disgusted.png”,2:”./emojis/fearful.png”,3:”./emojis/happy.png”,4:”./emojis/neutral.png”,5:”./emojis/sad.png”,6:”./emojis/surpriced.png”}
    global last_frame1
    last_frame1 = np.zeros((480, 640, 3), dtype=np.uint8)
    global cap1
    cap1 = cv2.VideoCapture(0)
    show_text=[0]
    def show_vid():
    if not cap1.isOpened():
    print(“cant open the camera1”)
    flag1, frame1 = cap1.read()
    frame1 = cv2.resize(frame1,(600,500))
    bounding_box = cv2.CascadeClassifier(‘./haarcascade_frontalface_default.xml’)
    gray_frame = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    num_faces = bounding_box.detectMultiScale(gray_frame,scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in num_faces:
    cv2.rectangle(frame1, (x, y-50), (x+w, y+h+10), (255, 0, 0), 2)
    roi_gray_frame = gray_frame[y:y + h, x:x + w]
    cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray_frame, (48, 48)), -1), 0)
    prediction = emotion_model.predict(cropped_img)

    maxindex = int(np.argmax(prediction))
    cv2.putText(frame1, emotion_dict[maxindex], (x+20, y-60), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
    show_text[0]=maxindex
    if flag1 is None:
    print (“Major error!”)
    elif flag1:
    global last_frame1
    last_frame1 = frame1.copy()
    pic = cv2.cvtColor(last_frame1, cv2.COLOR_BGR2RGB)
    img = Image.fromarray(pic)
    imgtk = ImageTk.PhotoImage(image=img)
    lmain.imgtk = imgtk
    lmain.configure(image=imgtk)
    lmain.after(10, show_vid)

    def show_vid2():
    print(‘vid2’)
    frame2=cv2.imread(emoji_dist[show_text[0]])
    pic2=cv2.cvtColor(frame2,cv2.COLOR_BGR2RGB)
    img2=Image.fromarray(frame2)
    imgtk2=ImageTk.PhotoImage(image=img2)
    lmain2.imgtk2=imgtk2
    lmain3.configure(text=emotion_dict[show_text[0]],font=(‘arial’,45,’bold’))

    lmain2.configure(image=imgtk2)
    lmain2.after(10, show_vid2)
    if __name__ == ‘__main__’:
    root=tk.Tk()
    img = ImageTk.PhotoImage(Image.open(“logo.png”))
    heading = Label(root,image=img,bg=’black’)

    heading.pack()
    heading2=Label(root,text=”Photo to Emoji”,pady=20, font=(‘arial’,45,’bold’),bg=’black’,fg=’#CDCDCD’)

    heading2.pack()
    lmain = tk.Label(master=root,padx=50,bd=10)
    lmain2 = tk.Label(master=root,bd=10)
    lmain3=tk.Label(master=root,bd=10,fg=”#CDCDCD”,bg=’black’)
    lmain.pack(side=LEFT)
    lmain.place(x=50,y=250)
    lmain3.pack()
    lmain3.place(x=960,y=250)
    lmain2.pack(side=RIGHT)
    lmain2.place(x=900,y=350)

    root.title(“Photo To Emoji”)
    root.geometry(“1400×900+100+10″)
    root[‘bg’]=’black’
    exitbutton = Button(root, text=’Quit’,fg=”red”,command=root.destroy,font=(‘arial’,25,’bold’)).pack(side = BOTTOM)

    show_vid()
    show_vid2()
    root.mainloop()
    cap1.release()

  19. Sajid says:

    Anyone can help me with this project please contact me .. i will pay you..

  20. Sajid says:

    i have struck at ‘keras.emotion.models” can anyone help me

  21. Sagar says:

    Showing Error Like This:
    Traceback (most recent call last):
    File “gui.py”, line 123, in
    show_vid()
    File “gui.py”, line 60, in show_vid
    num_faces = bounding_box.detectMultiScale(gray_frame,scaleFactor=1.3, minNeighbors=5)
    cv2.error: OpenCV(4.5.5) /io/opencv/modules/objdetect/src/cascadedetect.cpp:1689: error: (-215:Assertion failed) !empty() in function ‘detectMultiScale’

  22. Sagar says:

    2022-05-03 15:16:14.146271: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library ‘libcudart.so.11.0’; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/sagar/.local/lib/python3.8/site-packages/cv2/../../lib64:
    2022-05-03 15:16:14.146315: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    2022-05-03 15:16:17.213274: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library ‘libcuda.so.1’; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/sagar/.local/lib/python3.8/site-packages/cv2/../../lib64:
    2022-05-03 15:16:17.213322: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
    2022-05-03 15:16:17.213399: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (sagar-HP-Notebook): /proc/driver/nvidia/version does not exist
    2022-05-03 15:16:17.213716: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

  23. Sagar says:

    Displaying nothing but camera flash blinks continuesly

  24. Amrutha says:

    Hey,
    I am trying to run this code on my colab. I wanted to know what image file logo.png is in the gui.py file.
    Kindly anyone can help me with it?

  25. Zi Huang says:

    Can anyone help me resolve this error?

    Traceback (most recent call last):
    File “c:\Users\qsh25\Desktop\CS4824 Machine Learning Project\emoji-creator-project-code\emoji-creator-project-code\gui.py”, line 95, in
    img = ImageTk.PhotoImage(Image.open(“logo.png”))
    File “C:\Python310\lib\site-packages\PIL\Image.py”, line 3092, in open
    fp = builtins.open(filename, “rb”)
    FileNotFoundError: [Errno 2] No such file or directory: ‘logo.png’

  26. om sirsath says:

    bounding_box = cv2.CascadeClassifier(‘/home/shivam/.local/lib/python3.6/site-packages/cv2/data/haarcascade_frontalface_default.xml’)

    the question is where is the xml file?

  27. Chandan kumar says:

    [ERROR:[email protected]] global persistence.cpp:505 open Can’t open file: ‘/home/shivam/.local/lib/python3.6/site-packages/cv2/data/haarcascade_frontalface_default.xml’ in read mode

    error: OpenCV(4.7.0) /io/opencv/modules/highgui/src/window.cpp:1272: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function ‘cvShowImage’

    Anyone please help me, how to resolve this issue and how can i make my own casecade path folder
