

Applying AI to Real-Time Video Processing: The Basics and More


By Maksym Tatariants, Data Science Engineer at MobiDev.

There’s nothing new about using artificial intelligence (AI) in video processing. Alongside image processing, it’s one of the most common use cases for AI. And just like image processing, video processing relies on established techniques such as computer vision, object recognition, machine learning, and deep learning.

Whether you use computer vision and NLP in video editing and generation, object recognition in video content auto-tagging tasks, machine learning to streamline AI video analysis, or deep learning to expedite real-time background removal, the use cases continue to grow by the day.

Keep reading to learn what approach you can take when it comes to using AI in video processing.

The Basics of Real-Time Video Processing

Let’s start with the basics. Real-time video processing is an essential technology in surveillance systems using object and facial recognition. It’s also the go-to process that powers AI visual inspection software in the industrial sector.

So, how does video processing work? It involves a series of steps: decoding, computation, and encoding. Here’s what each one means (a short code sketch follows the list):

  • Decoding: The process required to convert a video from a compressed file back to its raw format.
  • Computation: A specific operation performed on a raw video frame.
  • Encoding: The process of converting the processed frames back into a compressed format.
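
To make those three steps concrete, here is a minimal decode, compute, encode loop written with OpenCV’s Python bindings. The file names and the blur operation are placeholders for illustration, not a prescribed setup:

```python
import cv2

# Placeholder file names; any codec supported by your OpenCV/FFmpeg build will do.
cap = cv2.VideoCapture("input.mp4")               # decoding: compressed file -> raw frames
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("output.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))    # encoding: raw frames -> compressed file

while True:
    ok, frame = cap.read()                        # decode one frame
    if not ok:
        break
    frame = cv2.GaussianBlur(frame, (25, 25), 0)  # computation: placeholder operation
    writer.write(frame)                           # encode the processed frame

cap.release()
writer.release()
```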

Now, the goal of any video processing task is to complete these steps as quickly and accurately as possible. The easiest ways to accomplish this are working in parallel and optimizing the algorithm for speed. In simple terms? You need to leverage file splitting and pipeline architecture.

What Is Video File Splitting?

Video file splitting lets algorithms work in parallel, which in turn makes it possible to use slower, more accurate models. This is accomplished by splitting a video into separate parts that are then processed at the same time.

You can think of video splitting as virtual file generation (each worker is assigned a portion of the original file) rather than the creation of physical sub-files.

Despite this, video file splitting isn’t the best option for real-time video processing. Why exactly? This process makes it difficult for you to pause, resume, and rewind a file while it's being processed.
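
That said, for offline jobs where splitting does fit, the “virtual” split can be as simple as handing each worker its own range of frame indices within the same file. Here is a rough sketch under that assumption; the file name and the blur step are placeholders, and returning full frames from workers is memory-hungry in practice:

```python
import cv2
from multiprocessing import Pool

VIDEO_PATH = "input.mp4"   # placeholder file name
NUM_WORKERS = 4

def process_chunk(frame_range):
    """Each worker opens its own handle and seeks to its slice; no physical sub-files are created."""
    start, end = frame_range
    cap = cv2.VideoCapture(VIDEO_PATH)
    cap.set(cv2.CAP_PROP_POS_FRAMES, start)                   # jump to the chunk start
    frames = []
    for _ in range(end - start):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.GaussianBlur(frame, (25, 25), 0))   # placeholder computation
    cap.release()
    return frames

if __name__ == "__main__":
    probe = cv2.VideoCapture(VIDEO_PATH)
    total = int(probe.get(cv2.CAP_PROP_FRAME_COUNT))
    probe.release()
    step = -(-total // NUM_WORKERS)                            # ceiling division
    ranges = [(i, min(i + step, total)) for i in range(0, total, step)]
    with Pool(NUM_WORKERS) as pool:
        chunks = pool.map(process_chunk, ranges)               # results come back in order, ready to encode
```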

What is Pipeline Architecture?

The other option is pipeline architecture. Instead of splitting the video itself, this approach splits and parallelizes the tasks performed during processing.

Here’s a quick example of what pipeline architecture looks like in practice, and how it can be used in a video surveillance system to detect and blur faces in real-time.

In this example, the pipeline splits the work into decoding, face detection, face blurring, and encoding. And if you want to improve the pipeline’s speed further, the deep learning stages are the natural place to focus your optimization effort.
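
Here is one way such a pipeline might be wired up in Python, with each stage running in its own process and handing frames to the next stage through bounded queues. The video source, the whole-frame blur standing in for a real detector, and the fixed 30 fps output are assumptions made for the sake of the sketch:

```python
import cv2
from multiprocessing import Process, Queue

def decode_stage(out_q):
    cap = cv2.VideoCapture("camera_stream.mp4")    # placeholder source
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out_q.put(frame)
    cap.release()
    out_q.put(None)                                # sentinel: end of stream

def detect_and_blur_stage(in_q, out_q):
    while True:
        frame = in_q.get()
        if frame is None:
            out_q.put(None)
            break
        # Stand-in for face detection + blurring; a real stage would blur only detected faces.
        out_q.put(cv2.GaussianBlur(frame, (25, 25), 0))

def encode_stage(in_q):
    writer = None
    while True:
        frame = in_q.get()
        if frame is None:
            break
        if writer is None:
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter("blurred.mp4",
                                     cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
        writer.write(frame)
    if writer is not None:
        writer.release()

if __name__ == "__main__":
    q1, q2 = Queue(maxsize=16), Queue(maxsize=16)  # bounded queues keep memory in check
    stages = [Process(target=decode_stage, args=(q1,)),
              Process(target=detect_and_blur_stage, args=(q1, q2)),
              Process(target=encode_stage, args=(q2,))]
    for p in stages:
        p.start()
    for p in stages:
        p.join()
```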

Decoding and Encoding Explained

What about decoding and encoding? There are two ways to perform these steps: in software or in hardware.

You may already be familiar with the concept of hardware acceleration. It’s made possible by the dedicated decoder and encoder chips built into the latest NVIDIA graphics cards, alongside the CUDA cores.

So, what options do you have available to you when it comes to hardware acceleration for the encoding and decoding processes? Here are some of the more popular options:

  • Compile OpenCV With CUDA Support: Compiling OpenCV with CUDA optimizes both decoding and any pipeline calculations that use OpenCV. Keep in mind that you will need to write this code in C++, since the Python wrapper doesn’t support it. But in situations that require both decoding and numeric calculations on the GPU without copying from CPU memory, it’s still one of the better choices available.
  • Compile FFmpeg or GStreamer With NVDEC/NVENC Codec Support: Another option is to use the built-in NVIDIA decoder and encoder included with custom builds of FFmpeg and GStreamer. However, we suggest using FFmpeg if possible, since it requires less maintenance. Also, many video libraries rely on FFmpeg under the hood, so replacing the system build with a GPU-enabled one automatically boosts their performance as well (see the sketch after this list).
  • Use NVIDIA Video Processing Framework: The final option is to use a Python wrapper to decode the frame directly into a PyTorch tensor on the GPU. This option removes extra copying from the CPU to the GPU.
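
As a rough illustration of the FFmpeg route, the snippet below shells out to an NVDEC/NVENC-enabled FFmpeg build from Python. The file names are placeholders, and the flags that actually work depend on how your FFmpeg was compiled:

```python
import subprocess

# Assumes an FFmpeg build compiled with NVIDIA hardware acceleration (NVDEC/NVENC).
subprocess.run([
    "ffmpeg",
    "-hwaccel", "cuda",        # decode on the GPU (NVDEC)
    "-i", "input.mp4",         # placeholder input file
    "-c:v", "h264_nvenc",      # encode on the GPU (NVENC)
    "output.mp4",              # placeholder output file
], check=True)
```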

Face Detection and Blurring

Object detection models such as SSD or RetinaFace are a popular option for face detection; they locate human faces within a frame. Based on our experience, we tend to prefer the Caffe-based face detection and TensorFlow object detection models, since they have given us the best results. Additionally, both can be run through the OpenCV dnn module.

So, what’s next after a face has been detected? The Python and OpenCV-based system returns bounding boxes and detection confidences, and a blurring algorithm is then applied to the cropped areas (a short code sketch follows).
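
Here is a sketch of that detect-and-blur step using the OpenCV dnn module with the standard Caffe-based face detector mentioned above. The model file paths and the 0.5 confidence threshold are assumptions for illustration:

```python
import cv2
import numpy as np

# Placeholder paths to the standard OpenCV DNN face detector files.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

def blur_faces(frame, conf_threshold=0.5):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()                     # shape: (1, 1, N, 7)
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence < conf_threshold:
            continue
        # Bounding box coordinates are normalized; scale them back to the frame size.
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
        x1, y1 = max(0, x1), max(0, y1)
        x2, y2 = min(w, x2), min(h, y2)
        if x2 > x1 and y2 > y1:
            frame[y1:y2, x1:x2] = cv2.GaussianBlur(frame[y1:y2, x1:x2], (51, 51), 0)
    return frame
```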

How Can You Build AI-Powered Live Video Processing Software?

It’s no secret that video processing, the codecs that power it, and both the hardware and software required are fairly technical in nature.

Still, that doesn’t mean you can’t use these tools to build your own live video processing software. 

Here is a brief breakdown of what you need to do:

  1. Start by adjusting your pre-trained neural network to complete the tasks required.
  2. Configure your cloud infrastructure to handle video processing and scale as needed.
  3. Build a software layer to condense the process and integrate specific use cases like mobile applications and admin or web panels.

Developing an MVP for similar video processing software can take up to four months using a pre-trained neural network and simple application layers. However, the scope and timeline depend on each project's details. In most cases, it makes sense to start with Proof of Concept development to explore the project specifics and find an optimal flow.

Maksym is keen on gaining new insights and experience in Data Science and Machine Learning. He is particularly interested in Deep Learning-based technologies and their application to business use cases.