Background
I'm using a cheap Chinese IP camera that only provides an RTSP stream to monitor my 3D printer via OctoPrint/OctoApp, which requires an MJPEG stream over HTTP.
This approach uses lighttpd and cgi-bin as the server. Something similar could be done using nginx + FastCGI, or in Python -- perhaps even as an OctoPrint plugin. You might be able to get systemd to start ffmpeg directly, but I had trouble getting the output stream sent to the socket, and wasn't sure how to stop the process when the socket was closed.
As it converts between video formats it requires a bit of CPU power on the host, though that can be tuned by configuring the resolution and framerate of the original RTSP stream, as well as resizing the output stream if required.
A 1080p stream at 10 FPS on an Intel Core i5-8259U uses about 5-10% CPU and 256MB of RAM. This approach is not efficient: each connection spawns its own ffmpeg process, so two devices viewing the stream doubles the CPU load. Once the web browser closes its connection, or the camera goes offline, the ffmpeg process does seem to stop reliably.
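If CPU load is a concern, the same ffmpeg invocation can scale the output down and cap the frame rate. A rough sketch (the RTSP URL, output width, and quality value here are placeholders, not the settings from my camera):

```shell
# Sketch: scale the MJPEG output to 640px wide (height chosen automatically,
# kept even) and cap it at 10 fps to reduce transcoding load.
ffmpeg -i "rtsp://CAMERA_IP:554/stream" \
       -vf "scale=640:-2" -r 10 \
       -c:v mjpeg -q:v 5 -f mpjpeg -an -
```

Lowering the camera's own resolution/framerate in its admin interface is cheaper still, since ffmpeg then has fewer pixels to decode in the first place.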
A tool like ONVIF Device Manager may help you figure out the RTSP stream URL if you can't find it in the documentation for your camera (look at the bottom of the "live video" screen).
Environment
Apart from getting the paths right to suit your setup, nothing here should be too specific to Linux or a particular distribution. This was done using:
- Debian GNU/Linux 10 (buster)
- lighttpd/1.4.53 (ssl)
- ffmpeg version 4.1.6-1~deb10u1
- CPU: Intel Core i5-8259U
lighttpd
Install lighttpd:
apt-get install lighttpd
Enable cgi-bin:
cd /etc/lighttpd/conf-enabled/
ln -s ../conf-available/10-cgi.conf
Configure the cgi-bin folder, and set lighttpd to not buffer the entire file before starting to send:
Alter /etc/lighttpd/conf-available/10-cgi.conf:
server.modules += ( "mod_cgi" )
$HTTP["url"] =~ "^/cgi-bin/" {
server.stream-response-body = 2
cgi.assign = ( "" => "" )
alias.url += ( "/cgi-bin/" => "/var/www/cgi-bin/" )
}
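Before restarting, it's worth checking the merged configuration for syntax errors (assuming the default config path on Debian):

```shell
# Syntax-check the lighttpd configuration; prints "Syntax OK" on success
lighttpd -t -f /etc/lighttpd/lighttpd.conf
```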
Restart lighttpd to pick up the configuration change:
systemctl restart lighttpd
Create the cgi-bin folder:
mkdir /var/www/cgi-bin
Scripts
Create the scripts that generate a single frame and a continuous stream; adjust the IP address and RTSP stream URL to suit your particular camera.
Create /var/www/cgi-bin/webcamframe:
#!/bin/bash
echo "Content-Type: image/jpeg"
echo "Cache-Control: no-cache"
echo ""
ffmpeg -i "rtsp://192.168.60.13:554/user=admin&password=SECRET&channel=1&stream=0.sdp" -vframes 1 -f image2pipe -an -
Create /var/www/cgi-bin/webcamstream:
#!/bin/bash
echo "Content-Type: multipart/x-mixed-replace;boundary=ffmpeg"
echo "Cache-Control: no-cache"
echo ""
ffmpeg -i "rtsp://192.168.60.13:554/user=admin&password=SECRET&channel=1&stream=0.sdp" -c:v mjpeg -q:v 1 -f mpjpeg -an -
Make the two scripts executable (otherwise you'll get a 500 Internal Server Error):
chmod +x /var/www/cgi-bin/webcamframe
chmod +x /var/www/cgi-bin/webcamstream
Complete
Now you should be able to access these two URLs on your server:
http://192.168.60.10/cgi-bin/webcamstream
http://192.168.60.10/cgi-bin/webcamframe
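As a quick sanity check from another machine (assuming curl and file are installed; adjust the server IP to yours):

```shell
# Fetch one frame and confirm it really is a JPEG
curl -s http://192.168.60.10/cgi-bin/webcamframe -o /tmp/frame.jpg
file /tmp/frame.jpg

# Peek at the stream's response headers without downloading video forever
# (curl will exit with a timeout error after 5 seconds; that's expected)
curl -s -D - -o /dev/null --max-time 5 \
     http://192.168.60.10/cgi-bin/webcamstream | head -n 5
```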
Both of these take 1-2 seconds to start, which I think is a combination of getting the RTSP stream going and an inherent delay in generating the MJPEG output. Once it is running, there is about a one-second delay on the video stream, which I gather is normal for ffmpeg generating MJPEG.
Troubleshooting
If you get distorted/smeared frames or artifacts on the output stream, try adding -rtsp_transport tcp at the start of the ffmpeg arguments to force RTSP over TCP. Apparently there may be a UDP buffer issue in ffmpeg and/or Linux that can cause frames to get truncated. Other options to try out are here.
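For example, the ffmpeg line in webcamstream would become (note that -rtsp_transport is an input option, so it must come before -i):

```shell
ffmpeg -rtsp_transport tcp \
       -i "rtsp://192.168.60.13:554/user=admin&password=SECRET&channel=1&stream=0.sdp" \
       -c:v mjpeg -q:v 1 -f mpjpeg -an -
```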
You can troubleshoot the scripts on the command line like this, which will let you see the output of ffmpeg and the start of what is sent to the web client:
cd /var/www/cgi-bin
./webcamframe | hd | head
Sample output of it working correctly is shown below. The conversion and pipe errors are OK; they are just due to head stopping the output early.
Extra Reading
Also have a look at this post on how to take individual JPG frames and turn them into MJPEG.
11 comments:
Thanks for the post. I've created a container to perform this conversion: https://github.com/piersfinlayson/rtspmpeg
Thanks for the quick solution. I ended up having to make some extra scripts for my needs (due to specific browser issues) but this was a great head start. I posted my scripts here: https://github.com/guino/rtsp2jpeg and made sure to give you credit for your original work.
Man, you saved my day! Thanks for this write-up.
Thank you so much! I was able to integrate my Hikvision doorbell into my home automation with MJPEG this way.
Hey,
I also created a Docker image based on your article and on the initial git project from "Piers" (https://github.com/piersfinlayson/rtspmpeg). My project (https://github.com/ckohrt/rtspmpeg) has updated the dependencies, and it turned out the original was not working anymore due to changes in a newer version of lighttpd. So I migrated from the Ubuntu base image to Alpine and now use specific versions so that it still works in half a year.... It took me a while to get it going (I was a newbie with Docker etc.)
But it works and I will improve it soon.
BR
Christian
Thank you so much, Steve!
I am running a website for our kids in the sailing club. We bought a Reolink 810A so the parents can watch their kids when training. Unfortunately the camera had only an RTSP stream, which made it next to impossible to integrate into the website. I followed your instructions here and installed it on a Raspberry Pi 4. Even though the processor is at roughly 80% with a single viewer, this is at least a great option. Thank you for that!
Talking about more than one viewer - could you think of an option where the stream is converted only once and then distributed to more than one viewer? This would keep the processor load at 80% regardless of how many access the stream. And 80% usage would be fine in my opinion.
Thanks anyways and best regards,
Marcel
For multiple consumers, have a look at ffmpeg and HLS (http live streaming). It does something like creating files on disk that a webserver can send to multiple clients, and a file that is essentially a playlist of the current file(s) to send. The client first grabs the playlist, then requests the video stream contained within.
I don't know much more about it than that, as it's just something I saw on my way to figuring out my solution. Hopefully it may be a useful starting point for you.
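As a rough sketch of that HLS idea (the output path and segment settings are assumptions, not something I've tested here), a single ffmpeg process could write rolling segments that lighttpd serves as static files to any number of viewers:

```shell
# One ffmpeg process writes a rolling HLS playlist + segments; the
# webserver serves them statically, so extra viewers add no transcoding load.
# -c:v copy avoids re-encoding, assuming the camera already outputs H.264.
ffmpeg -rtsp_transport tcp \
       -i "rtsp://192.168.60.13:554/user=admin&password=SECRET&channel=1&stream=0.sdp" \
       -c:v copy -an \
       -f hls -hls_time 2 -hls_list_size 5 \
       -hls_flags delete_segments \
       /var/www/html/hls/stream.m3u8
```

Clients would then load http://your-server/hls/stream.m3u8 (directly, or via a player like hls.js in the browser).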
Thank you so much, I was looking for a way to use my Yi modded camera with Octoprint and this works like a charm
Hello.
This solution works well, but how can I add more streams?
Thanks for the great write up.
I'm using octoprint_deploy on an oDroid C4 to control multiple printers and rather than have all my usb ports being used for USB cameras I have set up multiple Tapo IP cameras to watch the printers.
In answer to someone's question on doing multiple streams: you just have a file for each stream, i.e. I have the following files to point OctoPrint instances at:
webcamstream_E3v2Neo
webcamstream_AMegaS
webcamstream_PrusaMk4
webcamstream_GtA20M
Even better this is all running on a single SBC and not even coming close to bogging the system down.
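To illustrate the per-camera files described above, each one is just a copy of webcamstream pointing at a different camera (the filename and RTSP URL below are placeholders):

```shell
#!/bin/bash
# /var/www/cgi-bin/webcamstream_E3v2Neo -- one script per camera,
# differing only in the RTSP URL. Placeholder URL shown here.
echo "Content-Type: multipart/x-mixed-replace;boundary=ffmpeg"
echo "Cache-Control: no-cache"
echo ""
ffmpeg -i "rtsp://CAMERA_1_IP:554/stream1" -c:v mjpeg -q:v 1 -f mpjpeg -an -
```

Remember to chmod +x each new script, just like the originals.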