In the past few articles, we’ve covered some basic image processing concepts and the OpenCV library. Let’s use that knowledge to solve a real-life problem. Today we’re going to build a computer vision project that allows cars to enter a particular facility or garage based on their plate number. We will also show you how to use two additional libraries that come in handy in this project.
In this article, we cover:
- Requirements and installation
- Explaining the code
- Final result
If you want to follow along with us, make sure you have Python 3.6 or newer installed. You will also need to install OpenCV (for image processing), EasyOCR (for reading text from images) and Imutils (for contour manipulation). The best way to do this is to run the following commands in your command line.
pip install easyocr
pip install imutils
pip install opencv-python
The objective is to define a list of license plate numbers that are allowed to enter, and based on the car pictures we provide to our algorithm, it will tell us whether or not access is granted to that car. Let’s now explain how we plan on doing that. The idea is to find the contours of the image; statistically, the license plate is likely to be the only polygon-like contour in the image defined by 4 points.
This varies from picture to picture, but it is sensible to assume that when solving this problem in real life, all the pictures will be taken from the same angle. After we’ve detected the license plate, we will use EasyOCR to read the letters and numbers from the picture and store them as a string. All we have to do afterward is to check if the plate number matches existing plate numbers. Now let’s get to coding.
3. Explaining the code
Let’s first load an image and see its license plate.
import numpy as np
import cv2
import easyocr
import imutils
from google.colab.patches import cv2_imshow

# Loading the picture
img = cv2.imread('car1.png')
cv2_imshow(img)
As we can see, the plate is ‘PL8REC’. Now we will define a list with this plate number and two additional random plate numbers. After we run the algorithm, this car should be granted access. Later, we are going to remove this car from the list and see what happens.
valid_licence_plates = ['PL8REC', 'SP34AS', 'TEA34S']
As we already said, the idea is to find all the contours in this image and then find the one that looks like a polygon defined by 4 points in space. Before we do that, we are going to apply a Gaussian blur to the image.
The reason is that blurring the image reduces the total number of edges found, while still preserving the important part of the image, which is the car. Let’s see the difference between the normal and the blurred image when we perform Canny edge detection.
# Converting the image to grayscale
image_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Blurring the image with an 11x11 kernel
blured_image = cv2.GaussianBlur(image_gray, (11, 11), 0)

# Finding the edges of the blurred image
edge_image_blur = cv2.Canny(blured_image, 30, 100)

# Finding the edges of the non-blurred image
edge_image_normal = cv2.Canny(image_gray, 30, 100)

cv2_imshow(edge_image_normal)
cv2_imshow(edge_image_blur)
As we can see, the difference in the total number of edges found is quite big. We should note that you have to be careful when choosing the parameters for the Gaussian blur, since choosing too big a kernel can result in losing important information. Let’s now find all the contours based on this image.
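The effect is easy to demonstrate without OpenCV. In this toy 1-D sketch (the signal, the box-blur kernel and the edge threshold are all made up for illustration), smoothing suppresses most of the spurious noise “edges” while the real step edge survives:

```python
import numpy as np

rng = np.random.default_rng(0)

# A step edge (standing in for the car's outline) buried in pixel noise
signal = np.concatenate([np.zeros(50), 2 * np.ones(50)]) + rng.normal(0, 0.2, 100)

# A 5-tap box blur as a simple stand-in for Gaussian smoothing
kernel = np.ones(5) / 5
blurred = np.convolve(signal, kernel, mode="same")

# Count "edges": positions where the jump between neighbors exceeds a threshold
edges_raw = int(np.sum(np.abs(np.diff(signal)) > 0.3))
edges_blurred = int(np.sum(np.abs(np.diff(blurred)) > 0.3))

print(edges_raw, edges_blurred)
```

Blurring trades a little edge sharpness for far fewer false detections, which is the same trade-off the 11x11 Gaussian makes on the car picture.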
# Finding contour key points on the edge image
key_points = cv2.findContours(edge_image_blur, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# Defining contours from the key points
contours = imutils.grab_contours(key_points)

# Drawing the contours on a copy, so the original image stays clean
contours_image = img.copy()
cv2.drawContours(contours_image, contours, -1, (0, 255, 0), 3)
cv2_imshow(contours_image)
The parameter cv2.RETR_LIST specifies that we want all the contours, without any particular hierarchy, and cv2.CHAIN_APPROX_SIMPLE returns the minimum number of points needed to construct each contour. Using Imutils we assemble the contours from those points, and lastly, using cv2.drawContours, we draw them on the image.
Okay, now that we have the contours of the image, we should try to find the small rectangular one that represents our license plate. As previously mentioned, we are going to assume that our image has only one rectangular shape among the top twenty contours by area, sorted from biggest to smallest. So let’s sort them in that order.
After that, we are going to need the cv2.approxPolyDP function to approximate every contour with an n-vertex polygon. Naturally, when we come to a rectangular contour, the number of vertices in the approximation will be four. Those four points in space will represent the position of our plate.
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:20]

plate_location = None
for cnt in contours:
    square_approx = cv2.approxPolyDP(cnt, 10, True)
    if len(square_approx) == 4:
        plate_location = square_approx
        break

print(plate_location)
# Output:
# [[[299 281]]
#  [[445 278]]
#  [[449 305]]
#  [[303 311]]]
Great! We found our license plate. Now all we need to do is crop the image at those positions to extract the license plate. We’ll find the minimum and maximum positions along the x and y axes, and those values will represent our borders.
x1, x2 = min(plate_location[:, 0][:, 1]), max(plate_location[:, 0][:, 1])
y1, y2 = min(plate_location[:, 0][:, 0]), max(plate_location[:, 0][:, 0])
cropped_image = img[x1:x2, y1:y2]
cv2_imshow(cropped_image)
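The (N, 1, 2) shape of OpenCV contour arrays makes that indexing hard to read, so here is the same min/max logic run on the corner points printed earlier (numpy only, no image needed). Note that OpenCV stores each point as (x, y) = (column, row), while numpy slicing is [row, column], which is why index 1 feeds the row bounds:

```python
import numpy as np

# The four corner points from the sample output, in OpenCV's (N, 1, 2) contour shape
plate_location = np.array([[[299, 281]], [[445, 278]], [[449, 305]], [[303, 311]]])

corners = plate_location[:, 0]                       # shape (4, 2): one (x, y) point per row
x1, x2 = corners[:, 1].min(), corners[:, 1].max()    # row (vertical) bounds
y1, y2 = corners[:, 0].min(), corners[:, 0].max()    # column (horizontal) bounds

print(x1, x2, y1, y2)  # 278 311 299 449
```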
After that, let us define something we are going to call the reader. The reader is an instance of the Reader class, which takes a list of languages as its argument, in our case ['en'], telling it what language appears in the image. Reader comes from the third-party library EasyOCR, where OCR stands for Optical Character Recognition. It’s very simple to use, and we suggest you read more about it.
We tell the reader to read from the image above using the readtext method. It returns a somewhat unusual list of tuples containing the positions of the text, the text that was read, the certainty that the read is valid, etc. What we are interested in is the text, which sits in the second-to-last position of every tuple in the list. To extract and concatenate all of the strings that we read, we are going to use the map function.
reader = easyocr.Reader(['en'])
all_reads = reader.readtext(cropped_image)
license_plate = "".join(map(lambda read: read[-2], all_reads))
print(license_plate)
# Output:
# PL8REC
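If you don’t have EasyOCR at hand, the extraction step can be tried on a mocked result list. The coordinates and confidences below are invented for illustration; only the tuple structure (bounding-box points, text, confidence) matches what readtext produces:

```python
# A toy list mimicking the structure of easyocr's readtext output:
# (bounding-box corner points, recognized text, confidence) per detection.
all_reads = [
    ([[0, 0], [60, 0], [60, 20], [0, 20]], "PL8", 0.98),
    ([[60, 0], [120, 0], [120, 20], [60, 20]], "REC", 0.95),
]

# The text is the second-to-last element of each tuple
license_plate = "".join(map(lambda read: read[-2], all_reads))
print(license_plate)  # PL8REC
```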
Now that we have the license plate number, we can display our answer on the screen depending on whether that number is on our list. We are going to draw a rectangle around the plate and tell the driver whether access is granted. If the plate is on the list, the message will be green; otherwise, it will be red. Since we know this car is on the list, we expect a green message.
if license_plate in valid_licence_plates:
    cv2.rectangle(img, pt1=(y1, x1), pt2=(y2, x2), color=(0, 255, 0), thickness=5)
    cv2.putText(img, 'Access Allowed', (y1 - 30, x2 + 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)
else:
    cv2.rectangle(img, pt1=(y1, x1), pt2=(y2, x2), color=(0, 0, 255), thickness=5)
    cv2.putText(img, 'Access Denied', (y1 - 30, x2 + 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, cv2.LINE_AA)
cv2_imshow(img)
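The two branches above duplicate everything except the text and the color. One way to factor that out, so the decision can be tested without drawing anything (a hypothetical refactor, not from the original article), is a small helper:

```python
def access_decision(plate, valid_plates):
    """Return the message and BGR color to draw for a recognized plate."""
    if plate in valid_plates:
        return "Access Allowed", (0, 255, 0)   # green
    return "Access Denied", (0, 0, 255)        # red

# The plate we read, checked against the list defined earlier
print(access_decision("PL8REC", ["PL8REC", "SP34AS", "TEA34S"]))
# → ('Access Allowed', (0, 255, 0))
```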
And if for some reason we remove this car from our list, we’ll display something like this.
As we can see, we’ve managed to build this small but useful app quite simply. OpenCV is just great for real-life projects. It has so many applications in security, sports, industry, entertainment, etc. It can be combined with microcontrollers in projects like this one, where you need to control an electric device such as a ramp or a door. The possibilities are endless. In the next couple of articles, we are going to explore some more great things, including importing live video footage into our computer vision projects. So stay tuned.
Author at Rubik's Code
Stefan Nidzovic is a student at the Faculty of Technical Sciences, University of Novi Sad. More precisely, he studies at the department of biomedical engineering, focusing mostly on applying knowledge of computer vision and machine learning in medicine. He is also a member of the “Creative Engineering Center”, where he works on various projects, mostly in computer vision.
Author at Rubik's Code
Miloš Marinković is a student of Biomedical Engineering, at the Faculty of Technical Sciences, University of Novi Sad. Before he enrolled at the university, Miloš graduated from the gymnasium “Jovan Jovanović Zmaj” in 2019 in Novi Sad. Currently he is a member of “Creative Engineering Center”, where he was involved in a couple of image processing and embedded electronic projects. Also, Miloš works as an intern at BioSense Institute in Novi Sad, on projects which include bioinformatics, DNA sequence analysis and machine learning. When he was younger he was a member of the Serbian judo national team and he holds the black belt in judo.