Streaming: stream multiple RTMP IP cameras with OBS and ffmpeg

I set up a streaming server using Nginx and the RTMP module on a VPS running CentOS 7. I am trying to stream multiple IP cameras, using OBS to push the streams. My question is how to create a separate m3u8 stream file for each camera, using different applications in the nginx.conf file. I tried running several instances of OBS to achieve this, but I ran out of CPU. From what I've found, there is a way to relay multiple streams with ffmpeg, but I don't know the commands. My nginx.conf is the following:

    # RTMP configuration
    rtmp {


        server {
            listen 1935; # Listen on standard RTMP port
            chunk_size 4000;

    # Define the Application
            application camera1 {
                live on;
                exec ffmpeg -i rtmp://123.123.123.123/folder/$name -vcodec libx264
                    -vprofile baseline -x264opts keyint=40 -acodec aac -strict -2
                    -f flv rtmp://123.123.123.123/hls/$name;

                # Turn on HLS
                hls on;
                hls_path /mnt/hls/camera1;
                hls_fragment 3s;
                hls_playlist_length 60s;
                # disable consuming the stream from nginx as rtmp
                # deny play all;
            }


            application camera2 {
                live on;
                exec ffmpeg -i rtmp://123.123.123.123/$app/$name -vcodec libx264
                    -vprofile baseline -x264opts keyint=40 -acodec aac -strict -2
                    -f flv rtmp://123.123.123.123/hls/$name;
                # Turn on HLS
                hls on;
                hls_path /mnt/hls/camera2;
                hls_fragment 3s;
                hls_playlist_length 60s;
                # disable consuming the stream from nginx as rtmp
                # deny play all;
            }
        }
    }

    http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        server_tokens off;
        aio on;
        directio 512;
        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        server {
            listen 80;
            server_name 123.123.123.123;

            location / {
                root /var/www/html;
                index index.html index.htm;
            }

            location /hls {
                # Disable cache
                add_header 'Cache-Control' 'no-cache';

                # CORS setup
                add_header 'Access-Control-Allow-Origin' '*' always;
                add_header 'Access-Control-Expose-Headers' 'Content-Length';

                # allow CORS preflight requests
                if ($request_method = 'OPTIONS') {
                    add_header 'Access-Control-Allow-Origin' '*';
                    add_header 'Access-Control-Max-Age' 1728000;
                    add_header 'Content-Type' 'text/plain charset=UTF-8';
                    add_header 'Content-Length' 0;
                    return 204;
                }

                types {
                    application/dash+xml mpd;
                    application/vnd.apple.mpegurl m3u8;
                    video/mp2t ts;
                }

                root /mnt/;
            }
        }
    }

Using 2 instances of OBS with different stream names I could stream 2 cameras simultaneously, but I want to stream more than 50 cameras, and with that method it is impossible. I think it could be done with ffmpeg. The format of the RTSP streams is rtsp://username:password@hostname:port, but I need some help with the commands. Any help would be appreciated. Thanks in advance.
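
For reference, here is a minimal sketch of the ffmpeg relay approach: one lightweight ffmpeg process per camera, copying the video through without re-encoding and pushing it into the matching RTMP application. The IP address, credentials, hostnames, paths, and application names below are placeholders, and the cameras are assumed to already output H.264 (required for -c:v copy):

    #!/bin/bash
    # Relay each RTSP camera into its own nginx-rtmp application.
    # -rtsp_transport tcp : pull over TCP, usually more reliable than UDP
    # -c:v copy           : pass the H.264 video through untouched (near-zero CPU)
    # -c:a aac            : transcode audio to AAC for HLS compatibility

    CAMERAS=(
        "rtsp://username:password@camera1-host:554/stream camera1"
        "rtsp://username:password@camera2-host:554/stream camera2"
        # ...one line per camera
    )

    for entry in "${CAMERAS[@]}"; do
        read -r url app <<< "$entry"
        ffmpeg -rtsp_transport tcp -i "$url" \
               -c:v copy -c:a aac \
               -f flv "rtmp://123.123.123.123/$app/stream" &
    done
    wait

Because the video is not re-encoded, the CPU cost per stream stays small, which is what makes 50 cameras plausible on one VPS. With a relay like this, the "hls on" directives already produce the m3u8 files, so the CPU-heavy "exec ffmpeg" re-encoding lines in the config above could be dropped.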

Using 2 cameras to get the common (overlapping) area between them

I am setting up 2 cameras with a LIDAR. I want to get both the distance and the image. The cameras are equidistant from the LIDAR. How can I use the 2 images to do this? Please help.
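
For reference, a minimal sketch of one way to combine the two images, assuming the cameras are calibrated and the images rectified (OpenCV in Python; file names and parameters are placeholders):

    import cv2

    # Load the two views as grayscale (placeholder file names)
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching finds, for each pixel in the area both cameras see,
    # the horizontal shift (disparity) between the two views;
    # larger disparity means a closer object.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)

    # depth = focal_length * baseline / disparity, where the focal length
    # and the camera baseline come from calibration.
    vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("disparity.png", vis.astype("uint8"))

The LIDAR reading can then serve as a ground-truth distance to scale or sanity-check the stereo depth in the overlapping region.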

c# – How to switch between multiple cameras in a Unity scene, Silent Hill style

I have 3 cameras in a practice scene that I want to switch between. Currently, I have a meshless cube that acts as a trigger, with the following script:

    using UnityEngine;

    public class CameraSwitch : MonoBehaviour
    {
        [SerializeField]
        private GameObject startCamera;
        [SerializeField]
        private GameObject camera2;

        private void OnTriggerEnter(Collider other)
        {
            if (other.tag == "Player")
            {
                startCamera.SetActive(false);
                camera2.SetActive(true);
            }
        }
    }

It switches from the first camera to the second without problems, but it won't switch back if I move my player through the trigger again.

Is there a way to store several different cameras inside one object and simply disable all the cameras besides the one in use, so I don't have to keep wiring up a trigger for every pair?

Any advice is appreciated.
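
For what it's worth, a minimal sketch of the manager idea the question describes, assuming all cameras are assigned in the Inspector (class, field, and index names are illustrative):

    using UnityEngine;

    // Holds every scene camera and enables exactly one of them.
    public class CameraManager : MonoBehaviour
    {
        [SerializeField]
        private Camera[] cameras; // assign all cameras in the Inspector

        public void Activate(int index)
        {
            for (int i = 0; i < cameras.Length; i++)
            {
                cameras[i].gameObject.SetActive(i == index);
            }
        }
    }

    // Trigger that switches cameras on every pass, not just the first one.
    public class CameraSwitchTrigger : MonoBehaviour
    {
        [SerializeField] private CameraManager manager;
        [SerializeField] private int insideIndex;  // camera used inside the trigger
        [SerializeField] private int outsideIndex; // camera restored on exit

        private void OnTriggerEnter(Collider other)
        {
            if (other.CompareTag("Player")) manager.Activate(insideIndex);
        }

        private void OnTriggerExit(Collider other)
        {
            if (other.CompareTag("Player")) manager.Activate(outsideIndex);
        }
    }

Handling OnTriggerExit as well is what makes the switch repeatable: the previous camera is restored when the player leaves, so walking through the trigger again switches again.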

film cameras – Minolta keeps restarting

I have a Minolta Riva Zoom 70.

My camera resets, but only if I don't use it for a few days: you can hear a 'winding' noise and then the frame counter for the photos I took returns to 1.

I'm not sure why this happens; I don't touch the battery or open the back where the film is.

I just wonder whether this will cause any problems with my film when I develop it, or whether I shouldn't worry.

Thank you!

gps – Cameras with Android operating systems

Can anybody help me, please? I need to find a camera, or information about cameras, with built-in GPS, a touch screen, Wi-Fi, and Android 6.0 or higher (preferably Oreo). I am a property inspector and have used the Samsung Galaxy 2 for the past 6 years. I can no longer use it because the application I have to use to take photos (InspectorAde) now requires Android 6.0 or a higher operating system. They recommended that I use a new phone. I tried one and it's horrible. I need a dedicated camera similar to the one I had, not a phone. I can't find a list anywhere that includes camera operating systems; most cameras do not even list their operating system in their specifications. Please help.

Comparison of cameras for astrophotography and portrait [closed]

I want to buy a camera that is good for both astrophotography and portrait photography. What specifications should I compare?

Technique: Why does increasing the number of cameras in drones improve resolution?

The Pegasus drone, and surveillance drones in general, use normal cameras, but surprisingly they achieve the same effect as a high-magnification telescope.

I know that combining multiple monitors increases the total pixel count, which is reasonable, but somehow a similar technique works for drone cameras.

What is the relationship between the shutter speed of Polaroid cameras and their shutter button?

I am a little confused about the shutter speed on some Polaroid cameras.

Precisely, I have these models:

  • Polaroid boost (old)
  • Polaroid One Step 2 (new)

I often read that Polaroid cameras have a certain range of shutter speeds. So my first question is: how is the shutter speed decided?

The second question is about its relationship with pressing the shutter button. I read here that it is possible to obtain longer exposures by pressing the button that opens the film slot after pressing the button that takes the photo (on old cameras), and that with the new One Step 2 the same can be done by turning the camera off while holding down the shutter button.

But if that's true, it means that I decide the shutter speed, not the camera, so I don't understand how it works.

My third question is related to the second: how can I make long exposures with my Polaroid cameras?

Nikon: Is my camera's exposure incorrect every time I use the flash?

When using the flash in a non-automatic mode (aperture or shutter priority), the camera assumes the flash is being used for fill and calculates the exposure as for a photo without flash. So, if the subject is dark, it will increase the exposure and then add a large amount of flash on top. Therefore, when using the flash, switch to full manual or use your camera's "Program" mode.

raw – How do cameras convert color noise into colorless noise?

Modern phones do amazing processing of the data from their cameras. For example, the raw image below of a sculpture in a park after sunset (demosaiced with the IGV method in RawTherapee and developed with the "Neutral" profile) shows color noise, while the noise in the JPEG of the same scene with the same parameters is completely colorless (shots taken in RAW+JPEG mode in OpenCamera). The photos were taken with a Samsung Galaxy A320F/DS at f/1.9 aperture, 1/16 s shutter and ISO 2149. I have also seen similar processing on the Raspberry Pi V2.1 camera (Sony IMX219), and many modern DSLRs also seem to lack any color noise in their JPEG files, even though they have colorless noise.

I wonder what algorithms are used to achieve this conversion of color noise into colorless noise. Is it a demosaicing algorithm optimized for high ISO? Or is it a special noise-reduction algorithm applied after demosaicing? Or something else?

Raw: [raw shot, demosaiced]

JPEG: [corresponding JPEG photo]
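
For illustration, one common post-demosaic technique is chroma denoising: convert the image to a luma/chroma space and smooth only the color channels, which turns chroma (color) noise into purely colorless (luminance) noise. A minimal sketch with OpenCV in Python (file names and the filter size are placeholders; real camera pipelines use far more sophisticated filters):

    import cv2

    img = cv2.imread("demosaiced.png")  # placeholder input, BGR

    # Split into luma (Y) and chroma (Cr, Cb) planes
    y, cr, cb = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb))

    # Blur only the chroma planes: color speckles average out toward gray,
    # while luminance detail (and luminance noise) is left untouched.
    cr = cv2.medianBlur(cr, 7)
    cb = cv2.medianBlur(cb, 7)

    out = cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
    cv2.imwrite("chroma_denoised.png", out)

This only illustrates the effect; whether a given camera applies such a filter before or after demosaicing, or uses a joint algorithm, is exactly what the question asks.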