Saturday, October 21, 2017
Anonymous make-out - m4w
from Google Alert - anonymous http://ift.tt/2l8r3NE
via IFTTT
Anonymous of Tokyo
from Google Alert - anonymous http://ift.tt/2zqi7Ft
via IFTTT
Maybe It's Time to Do Away with Anonymous Reviews
from Google Alert - anonymous http://ift.tt/2hUnBBn
via IFTTT
New Rapidly-Growing IoT Botnet Threatens to Take Down the Internet
from The Hacker News http://ift.tt/2zF8PGz
via IFTTT
Incogneato Anonymous Box
from Google Alert - anonymous http://ift.tt/2hTU7TU
via IFTTT
Accountant
from Google Alert - anonymous http://ift.tt/2yF9W9i
via IFTTT
Working At Anonymous Production - Ask a Question
from Google Alert - anonymous http://ift.tt/2xddvSt
via IFTTT
Lynds Dark Nebula 183
Friday, October 20, 2017
Alcoholics Anonymous – Di and Stu
from Google Alert - anonymous http://ift.tt/2yGcFkC
via IFTTT
Node author always anonymous
from Google Alert - anonymous http://ift.tt/2xUcazC
via IFTTT
Flix Anonymous 45 - CKNW Original Podcasts
from Google Alert - anonymous http://ift.tt/2xckLOs
via IFTTT
[FD] SSD Advisory – Endian Firewall Stored From XSS to Remote Command Execution
Source: Gmail -> IFTTT-> Blogger
[FD] [RCE] TP-Link Remote Code Execution CVE-2017-13772
Source: Gmail -> IFTTT-> Blogger
[FD] CVE-2017-12579 Local root privesc in Hashicorp vagrant-vmware-fusion 4.0.24
Source: Gmail -> IFTTT-> Blogger
ISS Daily Summary Report – 10/19/2017
from ISS On-Orbit Status Report http://ift.tt/2xbpQGG
via IFTTT
Unpatched Microsoft Word DDE Exploit Being Used In Widespread Malware Attacks
from The Hacker News http://ift.tt/2yBrncn
via IFTTT
Anonymous Feedback Survey
from Google Alert - anonymous http://ift.tt/2groNeR
via IFTTT
Anonymous user a25ec2
from Google Alert - anonymous http://ift.tt/2ywjk1a
via IFTTT
Get maximum number of outputs from anonymous function
from Google Alert - anonymous http://ift.tt/2gqUjJY
via IFTTT
Kobie Marketing
from Google Alert - anonymous http://ift.tt/2ywo2vO
via IFTTT
Thursday, October 19, 2017
Anonymous user d84c04
from Google Alert - anonymous http://ift.tt/2x9ndFv
via IFTTT
Google Play Store Launches Bug Bounty Program to Protect Popular Android Apps
from The Hacker News http://ift.tt/2xSO4ty
via IFTTT
Architect at waa (we architech anonymous)
from Google Alert - anonymous http://ift.tt/2yvbt3X
via IFTTT
Implications for Campus Racial Climate
from Google Alert - anonymous http://ift.tt/2gnklhm
via IFTTT
Ravens: Mike Wallace (back) not practicing Thursday; Jeremy Maclin (back) wearing red, no-contact jersey (ESPN)
via IFTTT
Geisel Productions and Anonymous Content Win Best of Show for
from Google Alert - anonymous http://ift.tt/2hPNIct
via IFTTT
Anonymous Friend
from Google Alert - anonymous http://ift.tt/2yUYZDq
via IFTTT
Alcoholics anonymous code of ethics
from Google Alert - anonymous http://ift.tt/2ipn7Xx
via IFTTT
Sales Administrator
from Google Alert - anonymous http://ift.tt/2yxV3oR
via IFTTT
Live For Each Moon and DJ Anonymous (Total Stasis) – Library of Sounds Past Present and Future ...
from Google Alert - anonymous http://ift.tt/2yxBYTZ
via IFTTT
fastq-anonymous 1.0.0
from Google Alert - anonymous http://ift.tt/2zzEFEw
via IFTTT
Hulu anonymous proxy fix
from Google Alert - anonymous http://ift.tt/2zlMAEJ
via IFTTT
M51: The Whirlpool Galaxy
Wednesday, October 18, 2017
Download Link hesitates download for anonymous users
from Google Alert - anonymous http://ift.tt/2yrcNEV
via IFTTT
Anonymous function expressions aren't covered for 'infer from usage'
from Google Alert - anonymous http://ift.tt/2yw77He
via IFTTT
Anonymous user e3de79
from Google Alert - anonymous http://ift.tt/2gua0nF
via IFTTT
Anonymous Feedback in Front Page
from Google Alert - anonymous http://ift.tt/2kZR8yc
via IFTTT
Ravens: Brandon Williams (foot) will practice Wednesday; missed last 4 games (ESPN)
via IFTTT
ISS Daily Summary Report – 10/17/2017
from ISS On-Orbit Status Report http://ift.tt/2giI9mB
via IFTTT
Enable Google's New "Advanced Protection" If You Don't Want to Get Hacked
from The Hacker News http://ift.tt/2ik5aKe
via IFTTT
[FD] SEC Consult SA-20171018-1 :: Multiple vulnerabilities in Linksys E-series products
Source: Gmail -> IFTTT-> Blogger
Best anonymous image board
from Google Alert - anonymous http://ift.tt/2zwX0lT
via IFTTT
Anonymous - Sous Chef
from Google Alert - anonymous http://ift.tt/2x4tYrX
via IFTTT
Anonymous user 65b2e5
from Google Alert - anonymous http://ift.tt/2zxjX8b
via IFTTT
jQuery (or any core scripts)
from Google Alert - anonymous http://ift.tt/2x4UFgg
via IFTTT
Take Stock in Children Receives $75000 Gift from Anonymous Donor
from Google Alert - anonymous http://ift.tt/2zxjSBp
via IFTTT
Tuesday, October 17, 2017
Wisconsin is the most anonymous CFP contender
from Google Alert - anonymous http://ift.tt/2kVOKZf
via IFTTT
Stuck in Captive Portal with Anonymous Authentication
from Google Alert - anonymous http://ift.tt/2illEl1
via IFTTT
Anonymous user f9da84
from Google Alert - anonymous http://ift.tt/2ySMkk8
via IFTTT
Facebook buys anonymous teen compliment app TBH
from Google Alert - anonymous http://ift.tt/2ywYQVy
via IFTTT
[FD] SSD Advisory – Linux Kernel AF_PACKET Use-After-Free
Source: Gmail -> IFTTT-> Blogger
[FD] SSD Advisory – Ikraus Anti Virus Remote Code Execution
Source: Gmail -> IFTTT-> Blogger
[FD] [CVE-2017-14322] Interspire Email Marketer - Remote Admin Authentication Bypass
Source: Gmail -> IFTTT-> Blogger
[FD] SSD Advisory – FiberHome Directory Traversal
Source: Gmail -> IFTTT-> Blogger
📉 Ravens slide down four spots to No. 22 in Week 7 Power Rankings (ESPN)
via IFTTT
Anybody remember the anonymous comments
from Google Alert - anonymous http://ift.tt/2xN88Ob
via IFTTT
Templum Antiquum Ad Fontem Aegerium (Sant'Urbano alla Caffarella, Rome)
from Google Alert - anonymous http://ift.tt/2ifTWGv
via IFTTT
ISS Daily Summary Report – 10/16/2017
from ISS On-Orbit Status Report http://ift.tt/2hM4DwB
via IFTTT
Dangerous Malware Allows Anyone to Empty ATMs—And It’s On Sale!
from The Hacker News http://ift.tt/2kWp7HF
via IFTTT
Learn Ethical Hacking — Get 8 Online Courses (With Sample Videos) For Just $29
from The Hacker News http://ift.tt/2xLi2jv
via IFTTT
[FD] SEC Consult SA-20171017-0 :: Cross site scripting in Webtrekk Pixel tracking component
Source: Gmail -> IFTTT-> Blogger
Microsoft Kept Secret That Its Bug-Tracking Database Was Hacked In 2013
from The Hacker News http://ift.tt/2iiSfbf
via IFTTT
Anonymous Bullying Report
from Google Alert - anonymous http://ift.tt/2ifwAAP
via IFTTT
University of Oregon Receives $50 Million From Anonymous Donor
from Google Alert - anonymous http://ift.tt/2gLz5HC
via IFTTT
The Relaxed Satanist
from Google Alert - anonymous http://ift.tt/2yq6RJP
via IFTTT
Serious Crypto-Flaw Lets Hackers Recover Private RSA Keys Used in Billions of Devices
from The Hacker News http://ift.tt/2ymNfJ1
via IFTTT
Haumea of the Outer Solar System
Deleted account still shows in the comments instead of Anonymous
from Google Alert - anonymous http://ift.tt/2gfCbCL
via IFTTT
Monday, October 16, 2017
anonymous's post
from Google Alert - anonymous http://ift.tt/2x10YBy
via IFTTT
Facebook acquires anonymous polling app targeted at teens
from Google Alert - anonymous http://ift.tt/2ysBIXO
via IFTTT
Anonymous function with vector instead of multiple inputs
from Google Alert - anonymous http://ift.tt/2yqreXt
via IFTTT
Ravens: John Harbaugh defends OC Marty Mornhinweg and struggling offense (ESPN)
via IFTTT
Anonymous Advertiser Id:63409
from Google Alert - anonymous http://ift.tt/2ylVlBI
via IFTTT
Sales Executive
from Google Alert - anonymous http://ift.tt/2geFt9w
via IFTTT
Ravens drop passes and playoff hopes in loss to Bears - Jamison Hensley (ESPN)
via IFTTT
Anonymous (Dark)
from Google Alert - anonymous http://ift.tt/2yoSLbC
via IFTTT
Hackers Use New Flash Zero-Day Exploit to Distribute FinFisher Spyware
from The Hacker News http://ift.tt/2zuOhR4
via IFTTT
Yet Another Linux Kernel Privilege-Escalation Bug Discovered
from The Hacker News http://ift.tt/2hIuTaY
via IFTTT
ISS Daily Summary Report – 10/13/2017
from ISS On-Orbit Status Report http://ift.tt/2kUyasQ
via IFTTT
How A Drive-by Download Attack Locked Down Data of this City for 4 Days
from The Hacker News http://ift.tt/2xLoruG
via IFTTT
Raspberry Pi: Deep learning object detection with OpenCV
A few weeks ago I demonstrated how to perform real-time object detection using deep learning and OpenCV on a standard laptop/desktop.
After the post was published I received a number of emails from PyImageSearch readers who were curious if the Raspberry Pi could also be used for real-time object detection.
The short answer is “kind of”…
…but only if you set your expectations accordingly.
Even when applying our optimized OpenCV + Raspberry Pi install, the Pi is only capable of getting up to ~0.9 frames per second when applying deep learning for object detection with Python and OpenCV.
Is that fast enough?
Well, that depends on your application.
If you’re attempting to detect objects that are quickly moving through your field of view, likely not.
But if you’re monitoring a low traffic environment with slower moving objects, the Raspberry Pi could indeed be fast enough.
In the remainder of today’s blog post we’ll be reviewing two methods to perform deep learning-based object detection on the Raspberry Pi.
Looking for the source code to this post?
Jump right to the downloads section.
Raspberry Pi: Deep learning object detection with OpenCV
Today’s blog post is broken down into two parts.
In the first part, we’ll benchmark the Raspberry Pi for real-time object detection using OpenCV and Python. This benchmark will come from the exact code we used for our laptop/desktop deep learning object detector from a few weeks ago.
I’ll then demonstrate how to use multiprocessing to create an alternate method to object detection using the Raspberry Pi. This method may or may not be useful for your particular application, but at the very least it will give you an idea on different methods to approach the problem.
Object detection and OpenCV benchmark on the Raspberry Pi
The code we’ll discuss in this section is identical to our previous post on Real-time object detection with deep learning and OpenCV; therefore, I will not be reviewing the code exhaustively.
For a deep dive into the code, please see the original post.
Instead, we’ll simply be using this code to benchmark the Raspberry Pi for deep learning-based object detection.
To get started, open up a new file, name it real_time_object_detection.py, and insert the following code:
# import the necessary packages
from imutils.video import VideoStream
from imutils.video import FPS
import numpy as np
import argparse
import imutils
import time
import cv2
We then need to parse our command line arguments:
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())
Followed by performing some initializations:
# initialize the list of class labels MobileNet SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
We initialize CLASSES, our class labels, and corresponding COLORS for on-frame text and bounding boxes (Lines 22-26), followed by loading the serialized neural network model (Line 30).
Next, we’ll initialize the video stream object and frames per second counter:
# initialize the video stream, allow the camera sensor to warm up,
# and initialize the FPS counter
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)
fps = FPS().start()
We initialize the video stream and allow the camera to warm up for 2.0 seconds (Lines 35-37).
On Line 35 we initialize our VideoStream using a USB camera. If you are using the Raspberry Pi camera module, you’ll want to comment out Line 35 and uncomment Line 36 (which will enable you to access the Raspberry Pi camera module via the VideoStream class).
From there we start our fps counter on Line 38.
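If you’re curious what the fps counter is actually measuring, here is a minimal sketch of how such a counter can be built (my own approximation for illustration, not the actual imutils implementation):

import datetime

class SimpleFPS:
    # a minimal FPS counter: count frames between start() and stop(),
    # then divide by the elapsed wall-clock time
    def __init__(self):
        self._start = None
        self._end = None
        self._numFrames = 0

    def start(self):
        self._start = datetime.datetime.now()
        return self

    def stop(self):
        self._end = datetime.datetime.now()

    def update(self):
        # call this once per processed frame
        self._numFrames += 1

    def elapsed(self):
        # total seconds between start() and stop()
        return (self._end - self._start).total_seconds()

    def fps(self):
        # average throughput over the entire run
        return self._numFrames / self.elapsed()

The key idea is simply dividing the number of processed frames by the elapsed wall-clock time, the same two numbers this script prints at the end.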
We are now ready to loop over frames from our input video stream:
# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 400 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=400)

    # grab the frame dimensions and convert it to a blob
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
        0.007843, (300, 300), 127.5)

    # pass the blob through the network and obtain the detections and
    # predictions
    net.setInput(blob)
    detections = net.forward()
Lines 41-55 simply grab and resize a frame, convert it to a blob, and pass the blob through the neural network, obtaining the detections and bounding box predictions.
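As an aside, the two "magic numbers" passed to blobFromImage are a mean and a scale factor: 127.5 is subtracted from every pixel and the result is multiplied by 0.007843 ≈ 1/127.5, mapping intensities from [0, 255] into roughly [-1, 1]. Here is a quick sketch of the equivalent per-pixel arithmetic (my own illustration; OpenCV's actual implementation also handles the resizing and channel reordering):

import numpy as np

# a fake 300x300 BGR image with intensities in [0, 255]
pixels = np.random.randint(0, 256, size=(300, 300, 3)).astype("float32")

# what blobFromImage(image, 0.007843, (300, 300), 127.5) effectively
# computes per pixel: (pixel - mean) * scalefactor
normalized = (pixels - 127.5) * 0.007843

print(normalized.min(), normalized.max())  # approximately -1.0 and +1.0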
From there we need to loop over the detections to see what objects were detected in the frame:
    # loop over the detections
    for i in np.arange(0, detections.shape[2]):
        # extract the confidence (i.e., probability) associated with
        # the prediction
        confidence = detections[0, 0, i, 2]

        # filter out weak detections by ensuring the `confidence` is
        # greater than the minimum confidence
        if confidence > args["confidence"]:
            # extract the index of the class label from the
            # `detections`, then compute the (x, y)-coordinates of
            # the bounding box for the object
            idx = int(detections[0, 0, i, 1])
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")

            # draw the prediction on the frame
            label = "{}: {:.2f}%".format(CLASSES[idx],
                confidence * 100)
            cv2.rectangle(frame, (startX, startY), (endX, endY),
                COLORS[idx], 2)
            y = startY - 15 if startY - 15 > 15 else startY + 15
            cv2.putText(frame, label, (startX, y),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
On Lines 58-80, we loop over our detections. For each detection we examine the confidence and ensure the corresponding probability of the detection is above a predefined threshold. If it is, then we extract the class label and compute the (x, y) bounding box coordinates. These coordinates will enable us to draw a bounding box around the object in the image along with the associated class label.
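If the indexing above looks opaque: for this SSD model, net.forward() returns an array of shape (1, 1, N, 7), where N is the number of candidate detections and (to the best of my understanding) each 7-vector holds [batch_id, class_id, confidence, startX, startY, endX, endY], with the box coordinates normalized to [0, 1]. That normalization is why the code multiplies by np.array([w, h, w, h]) to recover pixel coordinates. A hypothetical debugging snippet for inspecting the raw output:

import numpy as np

# assuming `detections` is the array returned by net.forward()
print(detections.shape)  # e.g. (1, 1, 100, 7)
for i in np.arange(0, detections.shape[2]):
    (_, class_id, conf, x1, y1, x2, y2) = detections[0, 0, i]
    print("class={} conf={:.2f} box=({:.2f}, {:.2f}, {:.2f}, {:.2f})".format(
        int(class_id), conf, x1, y1, x2, y2))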
From there we’ll finish out the loop and do some cleanup:
    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

    # update the FPS counter
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
Lines 82-91 close out the loop — we show each frame, break if the `q` key is pressed, and update our fps counter.
The final terminal message output and cleanup is handled on Lines 94-100.
Now that our brief explanation of real_time_object_detection.py is finished, let’s examine the results of this approach to obtain a baseline.
Go ahead and use the “Downloads” section of this post to download the source code and pre-trained models.
From there, execute the following command:
$ python real_time_object_detection.py \
    --prototxt MobileNetSSD_deploy.prototxt.txt \
    --model MobileNetSSD_deploy.caffemodel
[INFO] loading model...
[INFO] starting video stream...
[INFO] elapsed time: 54.70
[INFO] approx. FPS: 0.90
As you can see from my results, we are obtaining ~0.9 frames per second throughput using this method and the Raspberry Pi; put another way, each forward pass takes roughly 1 / 0.9 ≈ 1.1 seconds.
Compared to the 6-7 frames per second using our laptop/desktop, the Raspberry Pi is substantially slower.
That’s not to say that the Raspberry Pi is unusable when applying deep learning object detection, but you need to set your expectations on what’s realistic (even when applying our OpenCV + Raspberry Pi optimizations).
Note: For what it’s worth, I could only obtain 0.49 FPS when NOT using our optimized OpenCV + Raspberry Pi install — that just goes to show you how much of a difference NEON and VFPV3 can make.
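If you compiled OpenCV yourself and want to verify that those optimizations actually made it into your build (they correspond to the ENABLE_NEON and ENABLE_VFPV3 CMake options), one quick check is to dump OpenCV's build configuration from Python and look for NEON and VFPV3 among the enabled CPU features:

import cv2

# prints OpenCV's compile-time configuration; on an optimized
# Raspberry Pi build, NEON and VFPV3 should appear as enabled
print(cv2.getBuildInformation())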
A different approach to object detection on the Raspberry Pi
Using the example from the previous section, we see that calling net.forward() is a blocking operation — the rest of the code in the while loop is not allowed to complete until net.forward() returns the detections.
So, what if net.forward() were not a blocking operation? Would we be able to obtain a faster frames per second throughput?
Well, that’s a loaded question.
No matter what, it will take a little over a second for net.forward() to complete using the Raspberry Pi and this particular architecture — that cannot change.
But what we can do is create a separate process that is solely responsible for applying the deep learning object detector, thereby unblocking the main thread of execution and allowing our while loop to continue.
Moving the predictions to a separate process gives the illusion that our Raspberry Pi object detector is running faster than it actually is, when in reality the net.forward() computation is still taking a little over one second.
The only problem here is that our output object detection predictions will lag behind what is currently being displayed on our screen. If you’re detecting fast-moving objects you may miss the detection entirely, or, at the very least, the object will be out of the frame before you obtain your detections from the neural network.
Therefore, this approach should only be used for slow-moving objects where we can tolerate lag.
To see how this multiprocessing method works, open up a new file, name it pi_object_detection.py, and insert the following code:
# import the necessary packages
from imutils.video import VideoStream
from imutils.video import FPS
from multiprocessing import Process
from multiprocessing import Queue
import numpy as np
import argparse
import imutils
import time
import cv2
For the code walkthrough in this section, I’ll be pointing out and explaining the differences (there are quite a few) compared to our non-multiprocessing method.
Our imports on Lines 2-10 are mostly the same, but notice the imports of Process and Queue from Python’s multiprocessing package.
Next, I’d like to draw your attention to a new function, classify_frame:
def classify_frame(net, inputQueue, outputQueue):
    # keep looping
    while True:
        # check to see if there is a frame in our input queue
        if not inputQueue.empty():
            # grab the frame from the input queue, resize it, and
            # construct a blob from it
            frame = inputQueue.get()
            frame = cv2.resize(frame, (300, 300))
            blob = cv2.dnn.blobFromImage(frame, 0.007843,
                (300, 300), 127.5)

            # set the blob as input to our deep learning object
            # detector and obtain the detections
            net.setInput(blob)
            detections = net.forward()

            # write the detections to the output queue
            outputQueue.put(detections)
Our new classify_frame function is responsible for our multiprocessing — later on we’ll set it up to run in a child process.
The classify_frame function takes three parameters:
- net: the neural network object.
- inputQueue: our FIFO (first in, first out) queue of frames for object detection.
- outputQueue: our FIFO queue of detections which will be processed in the main thread.
This child process will loop continuously until the parent exits and effectively terminates the child.
In the loop, if the inputQueue contains a frame, we grab it, then pre-process it and create a blob (Lines 16-22), just as we have done in the previous script.
From there, we send the blob through the neural network (Lines 26-27) and place the detections in an outputQueue for processing by the parent.
Now let’s parse our command line arguments:
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())
There is no difference here — we are simply parsing the same command line arguments on Lines 33-40.
Next we initialize some variables just as in our previous script:
# initialize the list of class labels MobileNet SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
This code is the same — we initialize class labels, colors, and load our model.
Here’s where things get different:
# initialize the input queue (frames), output queue (detections),
# and the list of actual detections returned by the child process
inputQueue = Queue(maxsize=1)
outputQueue = Queue(maxsize=1)
detections = None
On Lines 56-58 we initialize an inputQueue of frames, an outputQueue of detections, and a detections list.
Our inputQueue will be populated by the parent and processed by the child — it is the input to the child process. Our outputQueue will be populated by the child and processed by the parent — it is the output from the child process. Both of these queues have a maximum size of one, since our neural network will only be applying object detection to one frame at a time.
Let’s initialize and start the child process:
# construct a child process *independent* from our main process of
# execution
print("[INFO] starting process...")
p = Process(target=classify_frame, args=(net, inputQueue,
    outputQueue,))
p.daemon = True
p.start()
It is very easy to construct a child process with Python’s multiprocessing module — simply specify the target function and the args to the function, as we have done on Lines 63 and 64.
Line 65 specifies that p is a daemon process, and Line 66 kicks the process off.
From there we’ll see some more familiar code:
# initialize the video stream, allow the camera sensor to warm up,
# and initialize the FPS counter
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)
fps = FPS().start()
Don’t forget to change your video stream object to use the PiCamera if you desire by switching which line is commented (Lines 71 and 72).
Once our vs object and fps counter are initialized, we can loop over the video frames:
# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream, resize it, and
    # grab its dimensions
    frame = vs.read()
    frame = imutils.resize(frame, width=400)
    (fH, fW) = frame.shape[:2]
On Lines 80-82, we read a frame, resize it, and extract the width and height.
Next, we’ll work our queues into the flow:
    # if the input queue *is* empty, give the current frame to
    # classify
    if inputQueue.empty():
        inputQueue.put(frame)

    # if the output queue *is not* empty, grab the detections
    if not outputQueue.empty():
        detections = outputQueue.get()
First we check if the inputQueue is empty — if it is, we put a frame in the inputQueue for processing by the child (Lines 86 and 87). Remember, the child process is running in an infinite loop, so it will be processing the inputQueue in the background.
Then we check if the outputQueue is not empty — if something is in it, we grab the detections for processing here in the parent (Lines 90 and 91). When we call get() on the outputQueue, the detections are returned and the outputQueue is momentarily empty again.
If you are unfamiliar with Queues or if you want a refresher, see this documentation.
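As a refresher, here is a minimal, self-contained toy example (my own, not from this post) of the same parent/child queue pattern this script uses:

from multiprocessing import Process, Queue

def worker(inputQueue, outputQueue):
    # child: busy-poll the input queue and push results to the
    # output queue, exactly like classify_frame does above
    while True:
        if not inputQueue.empty():
            x = inputQueue.get()
            outputQueue.put(x * x)

if __name__ == "__main__":
    inputQueue = Queue(maxsize=1)
    outputQueue = Queue(maxsize=1)

    # a daemon child is terminated automatically when the parent exits
    p = Process(target=worker, args=(inputQueue, outputQueue))
    p.daemon = True
    p.start()

    # parent: submit work and collect results
    for i in range(5):
        inputQueue.put(i)
        print(outputQueue.get())  # blocks until the child responds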
Let’s process our detections:
    # check to see if our detections are not None (and if so, we'll
    # draw the detections on the frame)
    if detections is not None:
        # loop over the detections
        for i in np.arange(0, detections.shape[2]):
            # extract the confidence (i.e., probability) associated
            # with the prediction
            confidence = detections[0, 0, i, 2]

            # filter out weak detections by ensuring the `confidence`
            # is greater than the minimum confidence
            if confidence < args["confidence"]:
                continue

            # otherwise, extract the index of the class label from
            # the `detections`, then compute the (x, y)-coordinates
            # of the bounding box for the object
            idx = int(detections[0, 0, i, 1])
            dims = np.array([fW, fH, fW, fH])
            box = detections[0, 0, i, 3:7] * dims
            (startX, startY, endX, endY) = box.astype("int")

            # draw the prediction on the frame
            label = "{}: {:.2f}%".format(CLASSES[idx],
                confidence * 100)
            cv2.rectangle(frame, (startX, startY), (endX, endY),
                COLORS[idx], 2)
            y = startY - 15 if startY - 15 > 15 else startY + 15
            cv2.putText(frame, label, (startX, y),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
If our detections list is populated (i.e., it is not None), we loop over the detections just as we did in the previous section’s code.
In the loop, we extract and check the confidence against the threshold (Lines 100-105), extract the class label index (Line 110), and draw a box and label on the frame (Lines 111-122).
From there, still inside the while loop, we’ll complete a few remaining steps, followed by printing some statistics to the terminal and performing cleanup:
    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

    # update the FPS counter
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
In the remainder of the loop, we display the frame to the screen (Line 125), capture a key press, and check whether it is the quit key, at which point we break out of the loop (Lines 126-130). We also update our fps counter.
To finish out, we stop the fps counter, print our time/FPS statistics, and finally close windows and stop the video stream (Lines 136-142).
Now that we’re done walking through our new multiprocessing code, let’s compare the method to the single thread approach from the previous section.
Be sure to use the “Downloads” section of this blog post to download the source code + pre-trained MobileNet SSD neural network. From there, execute the following command:
$ python pi_object_detection.py \
    --prototxt MobileNetSSD_deploy.prototxt.txt \
    --model MobileNetSSD_deploy.caffemodel
[INFO] loading model...
[INFO] starting process...
[INFO] starting video stream...
[INFO] elapsed time: 48.55
[INFO] approx. FPS: 27.83
Here you can see that our while loop is capable of processing 27 frames per second. However, this throughput rate is an illusion — the neural network running in the background is still only capable of processing 0.9 frames per second.
Note: I also tested this code on the Raspberry Pi camera module and was able to obtain 60.92 frames per second over 35 elapsed seconds.
The difference here is that we can obtain real-time throughput by displaying each new input frame in real time and then drawing any previous detections on the current frame.
Once we have a new set of detections, we then draw the new ones on the frame.
This process repeats until we exit the script. The downside is that we see substantial lag. There are clips in the above video where we can see that all objects have clearly left the field of view…
…however, our script still reports the objects as being present.
Therefore, you should consider only using this approach when:
- Objects are slow moving and the previous detections can be used as an approximation to the new location.
- Displaying the actual frames themselves in real-time is paramount to user experience.
Summary
In today’s blog post we examined using the Raspberry Pi for object detection using deep learning, OpenCV, and Python.
As our results demonstrated we were able to get up to 0.9 frames per second, which is not fast enough to constitute real-time detection. That said, given the limited processing power of the Pi, 0.9 frames per second is still reasonable for some applications.
We then wrapped up this blog post by examining an alternate method to deep learning object detection on the Raspberry Pi by using multiprocessing. Whether or not this second approach is suitable for you is again highly dependent on your application.
If your use case involves low traffic object detection where the objects are slow moving through the frame, then you can certainly consider using the Raspberry Pi for deep learning object detection. However, if you are developing an application that involves many objects that are fast moving, you should instead consider faster hardware.
Thanks for reading and enjoy!
And if you’re interested in studying deep learning in more depth, be sure to take a look at my new book, Deep Learning for Computer Vision with Python. Whether this is the first time you’ve worked with machine learning and neural networks or you’re already a seasoned deep learning practitioner, my new book is engineered from the ground up to help you reach expert status.
Just click here to start your journey to deep learning mastery.
The post Raspberry Pi: Deep learning object detection with OpenCV appeared first on PyImageSearch.
from PyImageSearch http://ift.tt/2gcE82M
via IFTTT
KRACK Demo: Critical Key Reinstallation Attack Against Widely-Used WPA2 Wi-Fi Protocol
from The Hacker News http://ift.tt/2igYBb1
via IFTTT
Sunday, October 15, 2017
Americans Anonymous by Barry Delaney
from Google Alert - anonymous http://ift.tt/2wXmBmf
via IFTTT
▶ Bears upset Ravens 27-24 in OT on Connor Barth's GW 40-yard FG (ESPN)
via IFTTT
VPN Unlimited
from Google Alert - anonymous http://ift.tt/2wXs9ND
via IFTTT
Anonymous user d8382c
from Google Alert - anonymous http://ift.tt/2yoHVCq
via IFTTT
comments for anonymous validations field
from Google Alert - anonymous http://ift.tt/2zc0mK6
via IFTTT
Ravens: WR Breshad Perriman (concussion) and TE Maxx Williams (ankle) won't return vs. Bears (ESPN)
via IFTTT
Addicts anonymous near me
from Google Alert - anonymous http://ift.tt/2glSh1n
via IFTTT
anonymous-requests 0.1
from Google Alert - anonymous http://ift.tt/2ykhXmi
via IFTTT
📋 Ravens: WR Jeremy Maclin among inactives against Bears (ESPN)
via IFTTT
Ravens: Jeremy Maclin (shoulder) will have playing status determined in pregame - Adam Schefter (ESPN)
via IFTTT
Surf anonymously online
from Google Alert - anonymous http://ift.tt/2ykJPnD
via IFTTT