Detecting Fake News with Python and Machine Learning

Do you trust all the news you hear from social media?

Not all news is real, right?

How will you detect fake news?

The answer is Python. By practicing this advanced Python project on detecting fake news, you will learn to tell the difference between real and fake news.

Before moving ahead in this machine learning project, familiarize yourself with the terms related to it: fake news, TfidfVectorizer, and PassiveAggressiveClassifier.

I would also like to add that DataFlair has published a series of machine learning projects, where you will find interesting, open-source, advanced ML projects. Do check them out, and then share your experience in the comments. Here is the list of top Python projects:

  1. Fake News Detection Python Project
  2. Parkinson’s Disease Detection Python Project
  3. Color Detection Python Project
  4. Speech Emotion Recognition Python Project 
  5. Breast Cancer Classification Python Project
  6. Age and Gender Detection Python Project 
  7. Handwritten Digit Recognition Python Project
  8. Chatbot Python Project
  9. Driver Drowsiness Detection Python Project
  10. Traffic Signs Recognition Python Project
  11. Image Caption Generator Python Project

What is Fake News?

A type of yellow journalism, fake news encapsulates pieces of news that may be hoaxes and is generally spread through social media and other online outlets. It is often created to push or impose certain ideas, frequently with a political agenda. Such news items may contain false and/or exaggerated claims, may be amplified by recommendation algorithms, and can leave users trapped in a filter bubble.

What is a TfidfVectorizer?

TF (Term Frequency): The number of times a word appears in a document is its term frequency. A higher value means the term appears more often in that document, so the document is a good match when the term is part of the search terms.

IDF (Inverse Document Frequency): Words that occur many times in a document but also appear in many other documents may be irrelevant. IDF is a measure of how significant a term is across the entire corpus.

The TfidfVectorizer converts a collection of raw documents into a matrix of TF-IDF features.
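
As a tiny illustration (toy sentences, not the news dataset), here is what that matrix looks like; get_feature_names_out() is available in recent scikit-learn versions:

from sklearn.feature_extraction.text import TfidfVectorizer

#A toy corpus to show how raw text becomes TF-IDF features
docs = ["the news is real", "the news is fake", "fake news spreads fast"]
vectorizer = TfidfVectorizer(stop_words='english')
tfidf = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())  #the learned vocabulary
print(tfidf.toarray())                     #one row of TF-IDF weights per document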

What is a PassiveAggressiveClassifier?

Passive Aggressive algorithms are online learning algorithms. Such an algorithm remains passive for a correct classification outcome and turns aggressive in the event of a misclassification, updating and adjusting. Unlike most other algorithms, it does not converge. Its purpose is to make updates that correct the loss while causing very little change in the norm of the weight vector.
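
To see the passive/aggressive behaviour in action, here is a minimal sketch on a tiny made-up dataset (the feature values are purely illustrative, not from news.csv). It feeds scikit-learn's PassiveAggressiveClassifier one example at a time with partial_fit; the weight vector only moves when an example is misclassified or falls inside the margin.

import numpy as np
from sklearn.linear_model import PassiveAggressiveClassifier

#Toy data: two 'REAL' and two 'FAKE' examples with made-up features
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y = ['REAL', 'REAL', 'FAKE', 'FAKE']

pac = PassiveAggressiveClassifier()
for xi, yi in zip(X, y):
    #partial_fit performs one online update per example
    pac.partial_fit(xi.reshape(1, -1), [yi], classes=['FAKE', 'REAL'])
    print(pac.coef_)  #weights stay put (passive) unless this example forces an update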

Detecting Fake News with Python

The objective is to build a model that accurately classifies a piece of news as REAL or FAKE.

About Detecting Fake News with Python

This advanced Python project on detecting fake news deals with both fake and real news. Using sklearn, we build a TfidfVectorizer on our dataset. Then, we initialize a PassiveAggressiveClassifier and fit the model. In the end, the accuracy score and the confusion matrix tell us how well our model fares.

The fake news Dataset

The dataset we’ll use for this Python project is news.csv. This dataset has a shape of 7796×4: the first column identifies the news item, the second and third are the title and text, and the fourth column has labels denoting whether the news is REAL or FAKE. The dataset takes up 29.2 MB of space, and you can download it here.

Project Prerequisites

You’ll need to install the following libraries with pip:

pip install numpy pandas scikit-learn

You’ll also need JupyterLab to run your code (if you don’t have it, install it with pip install jupyterlab). Open your command prompt and run the following command:

C:\Users\DataFlair>jupyter lab

You’ll see a new browser window open up; create a new console and use it to run your code. To run multiple lines of code at once, press Shift+Enter.

Steps for detecting fake news with Python

Follow the steps below to detect fake news and complete your first advanced Python project –

  1. Make necessary imports:
import numpy as np
import pandas as pd
import itertools
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

2. Now, let’s read the data into a DataFrame, and get the shape of the data and the first 5 records.

#Read the data
df=pd.read_csv('D:\\DataFlair\\news.csv')

#Get shape and head
df.shape
df.head()

3. And get the labels from the DataFrame.

#DataFlair - Get the labels
labels=df.label
labels.head()

4. Split the dataset into training and testing sets.

#DataFlair - Split the dataset
x_train,x_test,y_train,y_test=train_test_split(df['text'], labels, test_size=0.2, random_state=7)

5. Let’s initialize a TfidfVectorizer with stop words from the English language and a maximum document frequency of 0.7 (terms with a higher document frequency will be discarded). Stop words are the most common words in a language that are to be filtered out before processing the natural language data. And a TfidfVectorizer turns a collection of raw documents into a matrix of TF-IDF features.

Now, fit and transform the vectorizer on the train set, and transform the vectorizer on the test set.

#DataFlair - Initialize a TfidfVectorizer
tfidf_vectorizer=TfidfVectorizer(stop_words='english', max_df=0.7)

#DataFlair - Fit and transform train set, transform test set
tfidf_train=tfidf_vectorizer.fit_transform(x_train) 
tfidf_test=tfidf_vectorizer.transform(x_test)

6. Next, we’ll initialize a PassiveAggressiveClassifier and fit it on tfidf_train and y_train.

Then, we’ll predict on the test set from the TfidfVectorizer and calculate the accuracy with accuracy_score() from sklearn.metrics.

#DataFlair - Initialize a PassiveAggressiveClassifier
pac=PassiveAggressiveClassifier(max_iter=50)
pac.fit(tfidf_train,y_train)

#DataFlair - Predict on the test set and calculate accuracy
y_pred=pac.predict(tfidf_test)
score=accuracy_score(y_test,y_pred)
print(f'Accuracy: {round(score*100,2)}%')

7. We got an accuracy of 92.82% with this model. Finally, let’s print out a confusion matrix to gain insight into the number of true and false positives and negatives.

#DataFlair - Build confusion matrix
confusion_matrix(y_test,y_pred, labels=['FAKE','REAL'])

So with this model, we have 589 true positives, 587 true negatives, 42 false positives, and 49 false negatives.
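
If you would like the matrix with labelled rows and columns rather than a bare array, here is a small optional sketch that reuses the variables from the steps above (with FAKE treated as the positive class):

#Label the confusion matrix cells for readability
import pandas as pd

cm = confusion_matrix(y_test, y_pred, labels=['FAKE','REAL'])
print(pd.DataFrame(cm, index=['actual FAKE','actual REAL'],
                   columns=['predicted FAKE','predicted REAL']))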

Summary

Today, we learned to detect fake news with Python. We took a political dataset, implemented a TfidfVectorizer, initialized a PassiveAggressiveClassifier, and fit our model. We ended up obtaining an accuracy of 92.82%.

Hope you enjoyed the fake news detection Python project. Keep visiting DataFlair for more interesting Python, data science, and machine learning projects.

185 Responses

  1. vanhelsing says:

    in splitting of data set i am getting error as
    value error : found input variable with inconsistent number of samples: [6335 , 5]

  2. Krishna says:

    hey, could someone tell what is the purpose of first column in the data set? what does it represent?

  3. surya says:

    getting unicode error while running the code in jupyter notebook.please tell me how to resolve.

    • DataFlair Team says:

      The Unicode issue happens because \u is the default escape code for Unicode. It can be fixed either by using double backslashes (so there is no \u anymore) or by converting the path to a raw string.

      I recommend changing the slashes in the path you pass to pd.read_csv to double backslashes (\\).

  4. Tarang says:

    its because in ‘.csv’ file each column is separated by a comma(i.e. delimiter is comma) but a news/article may also contain commas. to remove this error you should use a ‘.tsv’ file (i.e. delimiter is tab).

  5. Akshata Pratap Khomane says:

    File “”, line 2
    df=pd.read_csv('C:\Users\administratt\Downloads\news.csv')
    ^
    SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape

    im getting this error ,pls tell me what is solution for this.

    • Eddah Dena says:

      change all the \ to / and you are good to go

    • Yohannes Tilahun says:

      You can use this code: df=pd.read_csv(r"C:\Users\Habesha Computers\Documents\posts.csv")

    • DataFlair Team says:

      The issue happens because \u is the default escape code for Unicode. It can be fixed either by using double backslashes (so there is no \u anymore) or by converting the path to a raw string.

      I recommend changing the slashes in the path you pass to pd.read_csv to double backslashes (\\).

  6. Lakshya Kwatra says:

    write it as
    r'C:\Users\administratt\Downloads\news.csv'
    and you’re good to go.

  7. Arsh says:

    bro, i’m not able to create any GUI. However, I’m getting test result in visual studio. can somebody pls help on this………..

  8. Benjamin says:

    Where is the data sourced from and how is it identified as real or fake?

  9. Rajesh says:

    how can we give new data ..which is a current news and show its output like is fake or it real..how can we give new news article and say that it is fake or not!!

    • Sharan says:

      maybe this code can help you

      input_data = [input()]
      vectorized_input_data = tfidf_vectorizer.transform(input_data)
      prediction = pac.predict(vectorized_input_data)
      print(prediction)

    • DataFlair Team says:

      You can pickle (save) the model, and after preprocessing the new data you can predict whether it is fake or real.
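
      For example, a rough sketch along those lines (it reuses the tfidf_vectorizer and pac variables from the tutorial above; the file name is just an example):

      import pickle

      #Save the fitted vectorizer and model together
      with open('fake_news_model.pkl', 'wb') as f:
          pickle.dump((tfidf_vectorizer, pac), f)

      #Later, load them back and classify a new article
      with open('fake_news_model.pkl', 'rb') as f:
          tfidf_vectorizer, pac = pickle.load(f)

      new_articles = ["Some new article text to classify"]
      print(pac.predict(tfidf_vectorizer.transform(new_articles)))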

  10. rusa says:

    I got an invalid syntax error in this way.

    • DataFlair Team says:

      Please let us know on which line you are getting the error.
      Also, post the error message and complete stacktrace, we will look into the same.

  11. OMRI says:

    how can i save the model and use it to predict new articles

  12. Jyoti Singh says:

    I tried this project. Really enjoyed doing it. Thank You for the clear explaination

  13. Vinita Baj says:

    The fifth step is giving me a value error also saying that np.nan is an invalid document for the fake news prediction.please help me out with this..

  14. MEHTA RAJ VINODBHAI says:

    Thank you for sharing.

  15. dedeepya says:

    Could you plz explain me i was getting error at this syntax line Fit and transform train set
    and how to fix it

  16. shiva says:

    getting error while executing pac.fit and pac.predict command

  17. Nikhil says:

    could you please explain the working of this project

    • DataFlair Team says:

      After the train-test split, we preprocess the news with TfidfVectorizer, which computes the word counts, IDF values, and TF-IDF scores all at once.
      After fitting and transforming the input, we pass it to our algorithm. Once the model is trained, we predict on the test set to get the accuracy score of our model.
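
      If it helps, roughly the same flow can be written compactly as a scikit-learn Pipeline (an alternative arrangement, not the exact code used in the tutorial), assuming the x_train, x_test, y_train, y_test variables from the train-test split:

      from sklearn.pipeline import Pipeline
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import PassiveAggressiveClassifier

      model = Pipeline([
          ('tfidf', TfidfVectorizer(stop_words='english', max_df=0.7)),
          ('pac', PassiveAggressiveClassifier(max_iter=50)),
      ])
      model.fit(x_train, y_train)         #vectorizes and trains in one call
      print(model.score(x_test, y_test))  #accuracy on the held-out test set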

  18. RAHUL SRIVASTAV says:

    How can we increase the data set , like I have other csv with 1,20,000 values of the same format except the id or key …. the programme doesn’t work with other csv file of same format except instead of unique id it has 0,1,2,3 …

    Can you please send a source code so that it can be trained with other data sets too

    • shivam gupta says:

      bro can you share data to me on google drive

    • axat says:

      Hii Rahul Shrivastav ,
      The dataset they are using contains several fields like label, text and title .. you make sure that your dataset contains those fields and you are good to go.

  19. axat says:

    No it’s correct I tried it and it worked for me

  20. Ogutu says:

    I cant execute the following code “C:\Users\DataFlair>jupyter lab” I keep getting the following error “‘jupyter’ is not recognized as an internal or external command,
    operable program or batch file.” whats the problem and whats the solution?

    • DataFlair Team says:

      It is because of “Environment Variables”. Please follow these steps:

      1. Open the folder where you downloaded the “python-3.8.2-amd64.exe” setup (or any other version of the Python package).
      2. Double-click “python-3.8.2-amd64.exe”.
      3. Click “Modify”. You will see “Optional features”; click “Next”.
      4. Select “Add python to environment variables” and click “Install”.

      Then you can run Jupyter from any desired folder. For example, open the “cmd” command prompt and type:

      E:

      E:>jupyter notebook

      It will start without showing “‘jupyter’ is not recognized”.

  21. Akshaya says:

    I’m having a value error in 5 code Can anyone help me to solve this error?

  22. Kavya says:

    Can anyone one please tell me about the preprocessing part. And how can i give an article to this system and get the output whether it is fake or real . Please help me…..

    • DataFlair Team says:

      Here we preprocess the news with TfidfVectorizer, which computes the word counts, IDF values, and TF-IDF scores all at once.
      You can pickle (save) the model, and after preprocessing the new data you can predict whether it is fake or real.

  23. Prajwal says:

    Can we use pycharm instead of jupyter ?

  24. Shreya Dharmarajan says:

    Does anybody here knows how to download jupyter lab? And, how to access it from command prompt? I tried but it says it cannot find the file. Can someone help me?

  25. Kavya says:

    could you please give me the code to plot the confusion matrix??

    • iGorDitto says:

      import numpy as np
      import matplotlib.pyplot as plt
      from sklearn.metrics import confusion_matrix
      from sklearn.utils.multiclass import unique_labels

      def plot_confusion_matrix(y_true, y_pred, classes=None, normalize=False,
                                title=None, cmap=plt.cm.Blues):
          if not title:
              if normalize:
                  title = 'Normalized confusion matrix'
              else:
                  title = 'Confusion matrix, without normalization'

          cm = confusion_matrix(y_true, y_pred)
          #With string labels (FAKE/REAL) the class names can be taken directly from the data
          classes = unique_labels(y_true, y_pred)
          if normalize:
              cm = 100 * cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

          fig, ax = plt.subplots()
          im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
          ax.figure.colorbar(im, ax=ax)

          ax.set(xticks=np.arange(cm.shape[1]),
                 yticks=np.arange(cm.shape[0]),
                 xticklabels=classes, yticklabels=classes,
                 title=title,
                 ylabel='True label',
                 xlabel='Predicted label')

          plt.setp(ax.get_xticklabels(), rotation=45, ha='right',
                   rotation_mode='anchor')

          fmt = '.1f' if normalize else 'd'
          thresh = cm.max() / 2
          for i in range(cm.shape[0]):
              for j in range(cm.shape[1]):
                  ax.text(j, i, format(cm[i, j], fmt),
                          ha='center', va='center',
                          color='white' if cm[i, j] > thresh else 'black')
          fig.tight_layout()
          return ax

  26. Panagiotis Goulidakis says:

    I preprocessed the data according to some criteria before vectorization (split hyphenated words such as ‘counter-terrorist’, removed non-alphabetical words and words with length < 4, and used PorterStemmer), used stratified k-fold cross-validation with 3 to 5 splits on the dataset, and got an average accuracy of approximately 97%.

    But i worked on jupyter notebook not on lab

  27. Panagiotis Goulidakis says:

    Turns out my accuracy is at 92.5. Before, I had mistakenly vectorized the whole dataset before the split.

  28. Reema Save says:

    If we train the data with the given dataset and test run it over the completely new data. Will the accuracy be the same?

    • DataFlair Team says:

      Accuracy will be similar, but it depends on many factors. I recommend you try running the project.

  29. Mujib khan says:

    What types of Algorithms are used in this project

    • DataFlair Team says:

      In this machine learning project we are using the PassiveAggressiveClassifier. Passive-Aggressive algorithms are generally used for large-scale learning and are among the few ‘online learning’ algorithms. In online ML algorithms, the input data arrives in sequential order and the model is updated step by step, as opposed to batch machine learning, where the entire training dataset is used in one shot. This is very useful in situations where there is a huge amount of data and new data is being added continuously.
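
      As a rough sketch of that streaming idea (assuming a hypothetical large file big_news.csv with 'text' and 'label' columns), the data can be read in chunks and the model updated with partial_fit; HashingVectorizer is used here instead of TfidfVectorizer because it needs no fitting and so works with streaming data:

      import pandas as pd
      from sklearn.feature_extraction.text import HashingVectorizer
      from sklearn.linear_model import PassiveAggressiveClassifier

      vectorizer = HashingVectorizer(stop_words='english', n_features=2**18)
      pac = PassiveAggressiveClassifier(max_iter=50)

      for chunk in pd.read_csv('big_news.csv', chunksize=1000):
          X_chunk = vectorizer.transform(chunk['text'])
          #partial_fit updates the model one batch at a time instead of all at once
          pac.partial_fit(X_chunk, chunk['label'], classes=['FAKE','REAL'])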

  30. Aida says:

    I got ValueError in Step 6 while trying to run y_pred=pac.predict(tfidf_test). Can someone help to explain why do i received error at this step?
