Displaying Camera Streams Side-by-Side
There are many situations in which you want to watch the streams from multiple cameras simultaneously; security cameras are an obvious example. For a while now I have been “playing” with various commercial cameras (both USB and Wi-Fi), building, programming, and deploying Raspberry Pi cameras, Jetson TX1 and TX2 cameras, and the ESP32-CAM. Along the way I developed my own program, using the OpenCV library, to display two streams concatenated side-by-side. The program concatenates the two frames into a single window instead of juggling the coordinates of two separate windows on the screen. This is a particular advantage when you need to save the two camera streams perfectly synchronized in time with each other.
I present the C++ code for this below. The counterpart Python code is very similar, but I tend to choose native code since it runs faster and uses fewer resources.
Operating System and Computer
Originally I developed a working program using Visual Studio 2015 on a Windows 7 machine on which I had earlier installed OpenCV. However, all of my other devices run Ubuntu 16.04, so I ported the code to them, in particular the Jetson TX1 and the Raspberry Pi 3 Model B.
OpenCV Version 3.1.0
While not strictly necessary, I recommend building this particular version because it has support for GStreamer. For my Jetson TX1 running Ubuntu 16.04 I followed exactly the instructions found here:
https://docs.opencv.org/3.4/d6/d15/tutorial_building_tegra_cuda.html
On my Raspberry Pi 3s I run Ubuntu MATE (16.04). You will have to choose, as I did, one of the many sites that show in detail how to install and build the version of OpenCV you want for your particular operating system.
Installing Version 3.1.0 Over an Existing Version
When you install and build OpenCV, it is important that the Python bindings point to the new version. Check this after starting Python:
>>> import cv2
>>> print cv2.__version__
3.1.0
If you get the existing version instead, you have to find and remove the old .so library. The new binding library cv2.so is installed in /usr/lib/python2.7/dist-packages/. The existing version lives in the same directory; in my case it was named cv2.aarch64-linux-gnu.so. That library either has to be removed or moved to some directory that Python will not search at startup.
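It can also be worth confirming which version your C++ programs compile and link against. The following is a minimal sketch (not part of twocams.cpp) that simply prints the CV_VERSION macro defined by the OpenCV headers:
#include "opencv2/opencv.hpp"
#include <iostream>
int main()
{
    /*CV_VERSION comes from the OpenCV headers, e.g. "3.1.0"*/
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;
    return 0;
}
Build it with the same g++ command given at the end of this article; if it prints the old version, the compiler is still picking up the old headers and libraries.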
Code
The code is built using examples found mostly on the OpenCV website. In the following I comment on snippets of code, then present the program in its entirety.
At the beginning, #define USBONLY is used to choose between attached USB cameras and network cameras streaming over your LAN. The URLs in the example are for cameras port-forwarded out to the internet, so that this client software can be run from anywhere. There is also an option (#define SAVE) to save the combined stream to a file. A runtime alternative to the compile-time switch is sketched after the snippet below.
//#define USBONLY
//#define SAVE
#ifndef USBONLY
char *url = "http://admin:passwd1@yourLAN_IP:port1/videostream.cgi?mjpeg";
char *url2 = "http://admin:passwd2@yourLAN_IP:port2/videostream.cgi?mjpeg";
#endif
#ifdef USBONLY
int url = 0;
int url2 = 1;
#endif
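If you would rather not recompile to switch between USB and network sources, one possible variation, shown here purely as an illustration and not part of twocams.cpp, is to pick the sources from the command line at run time (this assumes the same headers and using-namespace lines as the full program, plus <cstdlib> for atoi):
/*illustrative only: a purely numeric argument is treated as a USB device
  index, anything else as a stream URL*/
VideoCapture openSource(const string &arg)
{
    if (arg.find_first_not_of("0123456789") == string::npos)
        return VideoCapture(atoi(arg.c_str()));
    return VideoCapture(arg);
}
/*in main(): VideoCapture cap = openSource(argv[1]); VideoCapture cap2 = openSource(argv[2]);*/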
Next, in the main program, capture of the two streams is initiated with VideoCapture() and the successful opening of each stream is checked:
VideoCapture cap(url); // open the video camera 1
if (!cap.open(url)){
cout << "ERROR: Cannot open the video file url" << endl;
return -1;
}
if (!cap.isOpened()) // if not success, exit program
{
cout << "ERROR: video file is not opened rul" << endl;
return -1;
}
sleep(1);
VideoCapture cap2(url2); // open the video camera 2
if (!cap2.open(url2)){
cout << "ERROR: Cannot open the video file url2" << endl;
return -1;
}
if (!cap2.isOpened()) // if not success, exit program
{
cout << "ERROR: video file is not opened url2" << endl;
return -1;
}
sleep(1);
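Because the whole point of concatenating the frames is to keep the two streams synchronized, it is worth knowing that VideoCapture also allows you to grab() both cameras back-to-back and only then retrieve() the images, which minimizes the time skew between the two captures. A minimal sketch of that pattern (not the form used in the program below, which reads one camera and grabs the other) looks like this:
Mat frame, frame2;
/*grab both frames as close together in time as possible...*/
if (cap.grab() && cap2.grab())
{
    /*...then do the slower decode/copy step afterwards*/
    cap.retrieve(frame);
    cap2.retrieve(frame2);
}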
Proceeding, the display window is created and positioned, and the frame size and frame rate of the camera are obtained. This information is then used to open an instance of VideoWriter(). Here it was chosen to save the stream as a .wmv file.
/*create and position the window*/
namedWindow("Concat", CV_WINDOW_AUTOSIZE);
moveWindow("Concat", 25, 100);
/*get properties*/
double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH); //get the width of frames of the video
double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT); //get the height of frames of the video
cout << "Frame Size = " << dWidth << "x" << dHeight << endl;
double fps = cap.get(CV_CAP_PROP_FPS); //get the frames per seconds of the video
cout << "Frame per seconds : " << fps << endl;
Size frameSize(static_cast<int>(2 * dWidth), static_cast<int>(dHeight));
#ifdef SAVE
VideoWriter oVideoWriter("MyVideo.wmv", CV_FOURCC('W', 'M', 'V', '2'), fps, frameSize, true); //initialize the VideoWriter object
if (!oVideoWriter.isOpened()) //if not initialize the VideoWriter successfully, exit the program
{
cout << "ERROR: Failed to write the video" << endl;
return -1;
}
#endif
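WMV2 is a natural choice when the file will be played back on a Windows machine, but depending on how FFmpeg/GStreamer support was built into your OpenCV it may not be available on Linux. In that case a widely supported fallback, shown here only as a sketch, is Motion-JPEG in an AVI container:
/*illustrative alternative if the WMV2 codec is not available in your build:
  Motion-JPEG frames inside an AVI container*/
VideoWriter oVideoWriter("MyVideo.avi", CV_FOURCC('M', 'J', 'P', 'G'), fps, frameSize, true);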
Finally we have the while loop. It demonstrates two methods of capturing a frame, then concatenates the two frames, displays the result, and optionally streams the concatenated frame to a file.
while (1)
{
Mat frame, frame2, frame3;
bool bSuccess;
/*read video camera 2 using this method*/
bSuccess = cap2.read(frame2);
if (!bSuccess) //if not success, break loop
{
cout << "ERROR: Cannot read frame from video file" << endl;
break;
}
/*read video camera 1 using an alternative method*/
bSuccess = cap.grab(); // read a new frame from video
if (!bSuccess) //if not success, break loop
{
cout << "ERROR: Cannot read frame from video file" << endl;
break;
}
/*now retrieve the three channels*/
bSuccess = cap.retrieve(frame, 0);
if (!bSuccess) //if not success, break loop
{
cout << "ERROR: Cannot retrieve channel 0" << endl;
}
bSuccess = cap.retrieve(frame, 1);
if (!bSuccess) //if not success, break loop
{
cout << "ERROR: Cannot retrieve channel 1" << endl;
}
bSuccess = cap.retrieve(frame, 2);
if (!bSuccess) //if not success, break loop
{
cout << "ERROR: Cannot retrieve channel 2" << endl;
}
/*the two frames are now concatenated side-by-side(horizontally) and shown*/
/*use vconcat if the frames are to be arranged vertically*/
hconcat(frame, frame2, frame3);
imshow("Concat", frame3);
#ifdef SAVE
oVideoWriter.write(frame3); //write the frame to the file
#endif
if ((waitKey(30) & 0xFF) == 27) //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
{
cout << "esc key is pressed by user" << endl;
break;
}
}
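One thing to keep in mind is that hconcat() requires the two input frames to have the same number of rows and the same type (and vconcat() requires the same number of columns). If your two cameras deliver different resolutions, a simple fix, sketched below purely for illustration, is to resize the second frame to match the first before concatenating:
/*illustrative only: hconcat needs matching heights (and types), so resize
  frame2 to the size of frame if the two cameras differ in resolution*/
if (frame2.size() != frame.size())
    resize(frame2, frame2, frame.size());
hconcat(frame, frame2, frame3);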
One should not forget that waitKey() is also the mechanism used to exit the program. Keep in mind that waitKey() only works while a display window created by imshow() exists. For example, if you comment out imshow() and only stream the result to a file, you will have no way to exit the program except killing it from the terminal.
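If you do run headless (no imshow()), one simple way to keep an exit path, shown here only as a sketch, is to stop after a fixed number of frames or a fixed recording time:
/*illustrative only: an exit condition that does not depend on waitKey(),
  for the case where imshow() is removed and the program runs headless*/
int framesWritten = 0;
const int maxFrames = 30 * 60 * 5; //roughly five minutes at 30 fps
while (framesWritten < maxFrames)
{
    /*...grab, retrieve, hconcat, and write frame3 as above...*/
    framesWritten++;
}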
twocams.cpp
Here is the complete code for twocams.cpp.
/*Capture video either from USB cameras or streamed from URLs and
concatenate the two videos either horizontally or vertically. Optionally
save the video to a file with format you specify.*/
#include "opencv2/opencv.hpp"
#include <iostream>
#include <unistd.h>
using namespace cv;
using namespace std;
//#define USBONLY
//#define SAVE
#ifndef USBONLY
char *url = "http://admin:passwd1@yourLAN_IP:port1/videostream.cgi?mjpeg";
char *url2 = "http://admin:passwd2@yourLAN_IP:port2/videostream.cgi?mjpeg";
#endif
#ifdef USBONLY
int url = 0;
int url2 = 1;
#endif
/*this function is not used*/
void fourCCStringFromCode(int code, char fourCC[5]) {
for (int i = 0; i < 4; i++) {
fourCC[3 - i] = code >> (i * 8);
}
fourCC[4] = '\0';
}
int main(int argc, char* argv[])
{
VideoCapture cap(url); // open the video camera 1
if (!cap.open(url)){
cout << "ERROR: Cannot open the video file url" << endl;
return -1;
}
if (!cap.isOpened()) // if not success, exit program
{
cout << "ERROR: video file is not opened rul" << endl;
return -1;
}
sleep(1);
VideoCapture cap2(url2); // open the video camera 2
if (!cap2.open(url2)){
cout << "ERROR: Cannot open the video file url2" << endl;
return -1;
}
if (!cap2.isOpened()) // if not success, exit program
{
cout << "ERROR: video file is not opened url2" << endl;
return -1;
}
sleep(1);
/*create and position the window*/
namedWindow("Concat", CV_WINDOW_AUTOSIZE);
moveWindow("Concat", 25, 100);
double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH); //get the width of frames of the video
double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT); //get the height of frames of the video
cout << "Frame Size = " << dWidth << "x" << dHeight << endl;
double fps = cap.get(CV_CAP_PROP_FPS); //get the frames per seconds of the video
cout << "Frame per seconds : " << fps << endl;
Size frameSize(static_cast<int>(2 * dWidth), static_cast<int>(dHeight));
#ifdef SAVE
VideoWriter oVideoWriter("MyVideo.wmv", CV_FOURCC('W', 'M', 'V', '2'), fps, frameSize, true); //initialize the VideoWriter object
if (!oVideoWriter.isOpened()) //if not initialize the VideoWriter successfully, exit the program
{
cout << "ERROR: Failed to write the video" << endl;
return -1;
}
#endif
/*Comment found on the web: "You're telling the writer that it should play at 30 frames per
second. So if you're actually capturing, say, 15 frames per second, those frames are going to
be played back faster than real time. Showing the captured image, waiting for a keypress, and
writing it to the file all take time. You need to account for that. You might try capturing
the video up-front, measuring the actual FPS while it happens, and then writing your AVI
using that value."
*/
while (1)
{
Mat frame, frame2, frame3;
bool bSuccess;
/*read video camera 2 using this method*/
bSuccess = cap2.read(frame2);
if (!bSuccess) //if not success, break loop
{
cout << "ERROR: Cannot read frame from video file" << endl;
break;
}
/*read video camera 1 using an alternative method*/
bSuccess = cap.grab(); // read a new frame from video
if (!bSuccess) //if not success, break loop
{
cout << "ERROR: Cannot read frame from video file" << endl;
break;
}
bSuccess = cap.retrieve(frame, 0);
if (!bSuccess) //if not success, break loop
{
cout << "ERROR: Cannot retrieve channel 0" << endl;
}
bSuccess = cap.retrieve(frame, 1);
if (!bSuccess) //if not success, break loop
{
cout << "ERROR: Cannot retrieve channel 1" << endl;
}
bSuccess = cap.retrieve(frame, 2);
if (!bSuccess) //if not success, break loop
{
cout << "ERROR: Cannot retrieve channel 2" << endl;
}
/*the two frames are now concatenated side-by-side(horizontally) and shown*/
/*use vconcat if the frames are to be arranged vertically*/
hconcat(frame, frame2, frame3);
imshow("Concat", frame3);
#ifdef SAVE
oVideoWriter.write(frame3); //write the frame to the file
#endif
if ((waitKey(30) & 0xFF) == 27) //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
{
cout << "esc key is pressed by user" << endl;
break;
}
}
return 0;
}
Building the Code
Finally, the code has to be built. I put the build command into a shell script, build_twocam.sh.
#!/bin/bash
g++ -ggdb -o twocam twocams.cpp -I/usr/include/opencv -lopencv_shape -lopencv_stitching -lopencv_objdetect -lopencv_superres -lopencv_videostab -lopencv_calib3d -lopencv_features2d -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs -lopencv_video -lopencv_photo -lopencv_ml -lopencv_imgproc -lopencv_flann -lopencv_core
Be sure to run the command below on your own machine to get the correct compile and link flags for your installation; you can also embed its output directly in the g++ command as $(pkg-config --cflags --libs opencv).
pkg-config --cflags --libs opencv
Last Advice
To close, here is some advice I found on the web. Indeed, the effective frame rate of the stream written to the save file will be slower than the frame rate of the cameras because of all of the overhead in the while loop. Advice:
“You’re telling the writer that it should play at 25 frames per second. So if you’re actually capturing, say, 15 frames per second, those frames are going to be played back faster than real time. Showing the captured image, waiting for a keypress, and writing it to the file all take time. You need to account for that. You might try capturing the video up-front, measuring the actual FPS while it happens, and then writing your AVI using that value.”
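Following that advice, one way to do it, sketched here only as an illustration, is to time a short burst of captured frames using OpenCV's tick counter and pass the measured rate to VideoWriter() instead of the value reported by the camera:
/*illustrative only: measure the FPS actually achieved when reading both
  cameras, using OpenCV's tick counter so no extra headers are needed*/
double measureFps(VideoCapture &capA, VideoCapture &capB, int samples)
{
    Mat a, b;
    int64 start = getTickCount();
    for (int i = 0; i < samples; i++)
    {
        capA.read(a);
        capB.read(b);
    }
    double elapsed = (getTickCount() - start) / getTickFrequency();
    return samples / elapsed;
}
/*then open the writer with the measured rate, for example:
  VideoWriter oVideoWriter("MyVideo.wmv", CV_FOURCC('W', 'M', 'V', '2'),
                           measureFps(cap, cap2, 60), frameSize, true);*/
Note that this only accounts for the capture overhead; for a closer estimate you could time the full display-and-write loop for a few seconds instead.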