Automatically Filter Image Uploads According to their NSFW Score


Vincent · Jun 10, 2020

When developing, and later administrating, a web or mobile application that deals with a lot of image uploads, shares, likes and so forth, you will want to set up a smart automated mechanism that moderates your users' image, GIF or video uploads on the fly and acts accordingly when NSFW content is detected, for example by rejecting, flagging or even censoring (i.e. applying a blur filter to) the suspected picture or video frame.

Since images and user-generated content dominate the Internet today, filtering NSFW content has become an essential component of web and mobile applications. NSFW is the acronym for not safe for work and refers to images or GIF frames that contain adult content, violence or gory details.

At the scale of the modern web, it is impractical to rely on a human operator to moderate each image upload one by one. Instead, Computer Vision, a field of Artificial Intelligence, is used for such a task. Computers are now able to automatically classify NSFW image content with greater precision and speed.

In this post, you will learn how to use the PixLab API to detect and filter unwanted content (GIFs included) and, based on the score returned by PixLab, decide whether to blur the image in question or delete it.

The PixLab API


PixLab is a Machine Learning SaaS platform which offers Computer Vision and Media Processing APIs, either via a straightforward HTTP RESTful API or offline via the SOD Embedded CV library. The PixLab HTTP API feature set includes, but is not limited to:

  • Over 130 Machine Vision & Media Processing API endpoints.
  • State-of-the-art document scan algorithms for Passports and ID cards via the /docscan API endpoint, face detection (/facedetect), facial landmarks extraction (/facelandmarks), facial recognition (/facecompare), NSFW content analysis (/nsfw) and many more.
  • 1TB of media storage served from a highly available CDN over SSL.
  • On the fly image compression, encryption and tagging.
  • Proxy for AWS S3 and other cloud storage providers.

Invoking the NSFW Endpoint

According to the PixLab documentation, the purpose of the NSFW endpoint is to detect not-safe-for-work (i.e. nudity & adult) content in a given image or video frame. NSFW is of particular interest when mixed with media processing API endpoints like blur, encrypt or mogrify to censor images on the fly according to their NSFW score. This can help developers automate tasks such as filtering user uploads.

HTTP Methods

The NSFW endpoint supports both the GET and POST HTTP methods, which means you can either send a direct link (public URL) to the target image to scan via GET, or upload the image directly, from your HTML form or web app for example, via POST. POST is the more flexible of the two and supports two content types: multipart/form-data and application/json.
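
As a quick illustration, here is what a POST upload could look like in Python. This is a minimal sketch under assumptions: the multipart field name (file) and the local file path are ours, so check them against the PixLab sample set before relying on it.

import requests

# Your PixLab API key
key = 'PIXLAB_API_KEY'

# Hypothetical sketch: upload a local image to the NSFW endpoint via POST
# using multipart/form-data. The form field name 'file' is an assumption;
# refer to the PixLab sample set for the exact field name expected.
with open('upload.jpg', 'rb') as f:
    req = requests.post('https://api.pixlab.io/nsfw',
                        params={'key': key},
                        files={'file': f})
reply = req.json()
print(reply)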

API Response

The NSFW API endpoint always returns a JSON object (i.e. application/json) for each HTTP request, whether successful or not. The following JSON fields are returned in the response body:

  • status
  • score
  • error

The field of interest here is the score value. The closer this value is to 1, the more likely the picture is NSFW. In that case, you have to act accordingly, for example by rejecting the picture or, better, applying a blur filter to it (see the example below).
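
For illustration only, a successful response might carry a body along these lines (the score value here is made up):

{
  "status": 200,
  "score": 0.85
}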

Real World Code Sample

Given a freshly uploaded image, we first perform nudity & adult content detection, and if the NSFW score is high enough, we apply a blur filter to the target picture. A typical (highly NSFW) blurred image should look like the following after processing:

[blur_nsfw.jpg: the blurred output image]

Such a blurred image was obtained via the following Python script:

import requests

# Target image: change to any link (possibly adult) you want, or switch to POST
# if you want to upload your image directly. Refer to the sample set for more info.
img = 'https://i.redd.it/oetdn9wc13by.jpg'
# Your PixLab API key
key = 'PIXLAB_API_KEY'

# Blur an image based on its NSFW score
req = requests.get('https://api.pixlab.io/nsfw', params={'img': img, 'key': key})
reply = req.json()
if reply['status'] != 200:
    print(reply['error'])
elif reply['score'] < 0.5:
    print("No adult content was detected in this picture")
else:
    # Highly NSFW picture
    print("Censoring NSFW picture...")
    # Call blur with the highest possible radius and sigma
    req = requests.get('https://api.pixlab.io/blur', params={'img': img, 'key': key, 'rad': 50, 'sig': 30})
    reply = req.json()
    if reply['status'] != 200:
        print(reply['error'])
    else:
        print("Censored image: " + reply['link'])

Similarly, here is the same code logic implemented in PHP:

<?php
/*
 * PixLab PHP client: a single-class PHP file with no dependencies, available on Github
 * https://github.com/symisc/pixlab-php 
 */
require_once "pixlab.php";

# Target image: change to any link (possibly adult) you want, or switch to POST if you want to upload your image directly. Refer to the sample set for more info.
# The target API endpoint we'll be using here: nsfw (https://pixlab.io/cmd?id=nsfw).
$img = 'https://i.redd.it/oetdn9wc13by.jpg';

# Your PixLab API key
$key = 'PIXLAB_API_KEY';

# Censor an image according to its NSFW score
$pix = new Pixlab($key);
/* Invoke NSFW */
if( !$pix->get('nsfw',array('img' => $img)) ){
    echo $pix->get_error_message();
    die;
}
/* Grab the NSFW score */
$score = $pix->json->score;
if( $score < 0.5 ){
    echo "No adult content were detected on this picture\n";
}else{
    echo "Censoring NSFW picture...\n";
    /* Call blur with the highest possible radius and sigma */
    if( !$pix->get('blur',array('img' => $img,'rad' => 50,'sig' => 30)) ){
        echo $pix->get_error_message();
    }else{
        echo "Blurred Picture URL: ".$pix->json->link."\n";
    }
}
?>

The code above is self-explanatory, and regardless of the programming language the logic is always the same: we make a simple HTTP GET request with the input image URL as the sole parameter. Most PixLab endpoints support multiple HTTP methods, so you can easily switch to POST-based requests if you want to upload your images & videos for analysis directly from your mobile or web app. Back to our sample: only two API endpoints are needed for our moderation task:

  1. NSFW is the analysis endpoint and must be called first. It performs nudity & adult content detection and returns a score value between 0 and 1. The closer this value is to 1, the more likely the picture/frame is NSFW.
  2. The blur endpoint is called afterwards, and only if the NSFW score returned earlier is greater than a certain threshold (set to 0.5 in our case). The blur endpoint returns a direct link (URL) to the blurred image, which is stored either on the PixLab storage cluster or on your own AWS S3 bucket if you have connected one (i.e. via your Access/Secret S3 keys) from the PixLab dashboard. This feature gives you full control over your processed media files. A compact helper wrapping both steps is sketched below.
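
If you moderate every upload this way, it can help to fold the two calls into a single reusable function. The Python sketch below is ours, not part of any official PixLab SDK; the function name, the threshold default and the error handling are assumptions:

import requests

def moderate_image(img, key, threshold=0.5):
    """Hypothetical helper (not part of the PixLab SDK): return the original
    URL if the image scores below the threshold, or a link to a blurred copy
    otherwise. Raises RuntimeError on any API error."""
    reply = requests.get('https://api.pixlab.io/nsfw',
                         params={'img': img, 'key': key}).json()
    if reply['status'] != 200:
        raise RuntimeError(reply['error'])
    if reply['score'] < threshold:
        return img  # safe enough: keep the original picture
    # Highly NSFW: censor it with the highest possible radius and sigma
    reply = requests.get('https://api.pixlab.io/blur',
                         params={'img': img, 'key': key, 'rad': 50, 'sig': 30}).json()
    if reply['status'] != 200:
        raise RuntimeError(reply['error'])
    return reply['link']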

Conclusion

Surprisingly, automatically filtering unwanted content is straightforward for the average web developer or site administrator who may lack machine learning skills, thanks to open computer vision technologies such as the ones provided by PixLab. You can find more code samples at github.com/symisc/pixlab, and at github.com/symisc/sod if you are a C/C++ developer.