linux – How to retry wget if a specific error is found?

I have been trying to download a very large folder (>1 TiB) containing multiple files with wget, but it keeps giving me a “segmentation fault” error.

/usr/local/bin/wget -r -c -N -l inf --no-iri --user=USER --password="PASS" --no-check-certificate ftps://ftp.lab.org/GetFolder

Is there a way to capture that particular error (segmentation fault) and, if it occurs, re-run the above command in a while loop until all files in GetFolder are downloaded?
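One approach is to wrap the command in a shell loop that re-runs it until it exits 0; a segmentation fault surfaces as exit status 139 (128 + SIGSEGV), and because of `-c` and `-N` each pass resumes where the last one stopped. A minimal sketch, where `retry_until_ok` and the stand-in `flaky` are illustrative names (substitute the full wget invocation for `flaky` in real use):

```shell
# Re-run a command until it exits 0, reporting each failure's exit status.
retry_until_ok() {
    attempt=0
    until "$@"; do
        status=$?
        attempt=$((attempt + 1))
        echo "exit status $status (attempt $attempt), retrying" >&2
        # sleep 10   # back off between attempts in real use
    done
    echo "succeeded after $attempt retries"
}

# Demo with a stand-in command that fails twice and then succeeds;
# replace `flaky` with the real wget command line.
counter=$(mktemp)
echo 0 > "$counter"
flaky() {
    n=$(cat "$counter")
    echo $((n + 1)) > "$counter"
    [ "$n" -ge 2 ]
}
result=$(retry_until_ok flaky 2>/dev/null)
echo "$result"   # succeeded after 2 retries
rm -f "$counter"
```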

terminal – bash fork retry no child processes cpanel

Sorry for my English.

I am working on a Node.js test project on cPanel.

I used the cPanel terminal and ran the nodemon index.js command, then I got this error message:

jailshell: fork: retry: No child processes
jailshell: fork: retry: No child processes
jailshell: fork: retry: No child processes
jailshell: fork: retry: No child processes
jailshell: fork: retry: No child processes

How can I stop this error?
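For background: jailshell prints `fork: retry:` when `fork()` keeps failing, which on cPanel/CloudLinux hosts usually means the account's per-user process limit has been exhausted (for example by leftover node/nodemon processes). A quick diagnostic sketch, assuming a POSIX-ish shell on Linux:

```shell
# Compare the shell's per-user process limit with how many processes
# the account is currently running; if they are close, fork() fails.
limit=$(ulimit -u 2>/dev/null || echo unlimited)
count=$(ps -u "$(id -un)" -o pid= | wc -l)
echo "process limit: $limit, currently running: $count"
```

If the count is at the limit, killing stray node processes (or asking the host to raise the limit) is the usual fix.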

c# – Replace specific exception as part of try/catch retry mechanism

I'm looking for the most efficient way to replace a specific exception while leaving the handling of all other exceptions the same.

try
{
    // some business logic
}
catch (UniqueConstraintViolationException uex) when (!uex.IsFatal() && !ShouldRetry(descriptor, command, uex, ref retryResult))
{
    throw new ItemAlreadyExistsException("Item already exists", uex);
}
catch (Exception ex) when (!ex.IsFatal() && ShouldRetry(descriptor, command, ex, ref retryResult))
{
}

service – Nginx retry if backend is temporarily down

I’ve got a WebSocket server running behind an nginx reverse proxy. But when I need to restart the Node.js service, clients immediately get a connect() failed (111: Connection refused) while connecting to upstream error.

Is there a way to let nginx wait for 5-10 seconds and retry before giving up the connection?

upstream websocket {
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
}

server {
    server_name .......;

    location / {
        proxy_hide_header 'access-control-allow-origin';
        add_header 'access-control-allow-origin' '$http_origin' always;
        proxy_pass https://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }

    ssl_certificate /etc/letsencrypt/live/....../fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/....../privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    listen 443 ssl; # managed by Certbot
}
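For context: nginx only retries a failed connection on another entry in the upstream block (governed by `proxy_next_upstream`, whose default of `error timeout` already covers connection refused), so with a single `server` line there is nothing to fail over to, and nginx has no built-in way to wait 5-10 seconds before retrying. A commonly used workaround, sketched here with illustrative values, is to list the same backend more than once so a refused connection gets a second attempt:

```nginx
upstream websocket {
    # Same backend listed twice: when the first attempt is refused,
    # nginx immediately retries on the "next" (identical) peer.
    server 127.0.0.1:3000 max_fails=0;
    server 127.0.0.1:3000 max_fails=0;
}
```

With `max_fails=0` nginx never marks the peer as down, so every request gets the full set of attempts; a genuine wait-and-retry window during a restart would still have to live in the client's reconnect logic.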

How to configure nginx to retry connections to its backend

I have nginx with a Node.js back-end.

From time to time the back-end blocks for less than a second. If I immediately make a new request, I receive the correct page.

this is from the log:

2020/06/22 08:11:34 (error) 61280#61280: *437325287 connect() failed (111: Connection refused) while connecting to upstream, client: MYIP, server: MY.SERVER, request: "GET /api/admin/ HTTP/1.1", upstream: "http://127.0.0.1:2000/api/admin/", host: "MYHOST", referrer: "https://MYREFERER/admin"

The nginx conf looks like this:

upstream backend {
    server 127.0.0.1:2000;
}

I understand what the error means.

Is there a way to make nginx retry the connection if it is refused?

retry on reverify existing backlinks

I had a load of links removed after reverifying, and when I checked the URLs the links were still there. Is it possible to have GSA SER retry verification a few times, since the URLs were probably just having hosting issues or whatever? I have looked under Project > Options at the top and can't see anything.

The tick box “remove after 1st verification try” is greyed out and can't be ticked (not what I wanted, but it can't be ticked anyway). There's an option “don't remove URLs” which can be ticked, but I want it to recheck the URLs and then remove the links only if a link is definitely not there.

c# – Reactive .NET retry policy

I haven’t found a code example of a reactive retry operator with a retry strategy good enough for my needs, so I tried to make my own. It seems to work in my tests, but I would really appreciate some feedback. (C# 8.0)

using System;
using System.Linq;
using System.Reactive;
using System.Reactive.Concurrency;
using System.Reactive.Linq;

namespace Fncy.Helpers
{
    public static class ObservableHelpers
    {
        public delegate TimeSpan? NextRetryDelay(int previousRetryCount, TimeSpan elapsedTime, Exception retryReason);

        public static IObservable<TSource> RetryAfterDelay<TSource>(
            this IObservable<TSource> source,
            NextRetryDelay retryDelay,
            IScheduler scheduler = null)
        {
            if (retryDelay == null) throw new ArgumentNullException(nameof(retryDelay));
            scheduler ??= Scheduler.Default;
            return source.RetryWhen(errors =>
            {
                // Time of the initial subscription, used to report elapsed time to the strategy.
                var retryStart = scheduler.Now;
                int previousRetryCount = 0;
                return errors.SelectMany(ex =>
                {
                    // Ask the strategy for the next delay; a null delay means "give up".
                    var delay = retryDelay(previousRetryCount++, scheduler.Now - retryStart, ex);
                    return delay == null
                        ? Observable.Throw<Unit>(ex) // stop retrying and surface the error
                        : Observable.Return(Unit.Default).Delay(delay.Value, scheduler); // resubscribe after the delay
                });
            });
        }
    }
}

What is the default retry behavior if we do not configure any retry templates in our consumer settings?

I am using Spring Boot 2.1.7.RELEASE and spring-kafka 2.2.8.RELEASE. I am using the @KafkaListener annotation to create a consumer, with all the default consumer settings.

Now my question is:

  1. If I do not configure any retry template, will the consumer have a default retry behavior or not?

How do I make apt-get retry on a hash sum mismatch?

I frequently get a “Hash Sum mismatch” error. This ends up breaking the automated builds of my Dockerfiles, so I have to keep checking whether the mismatch occurred.

Is there any way to tell apt-get to retry after a hash sum mismatch?
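If the mismatch is transient (a mirror being updated mid-fetch), two things that often help are apt's download-retry option and clearing the stale package lists before retrying. A hypothetical Dockerfile fragment; `Acquire::Retries` and the list path are standard apt/Debian details, but whether a particular mismatch is retryable depends on the mirror:

```dockerfile
# Ask apt to retry failed downloads up to 3 times; if the index is still
# inconsistent, clear the cached package lists and fetch them fresh.
RUN apt-get -o Acquire::Retries=3 update \
 || (rm -rf /var/lib/apt/lists/* && apt-get -o Acquire::Retries=3 update)
```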

Is there any way to use Spring Retry with state to resume where you left off when the application restarts?

I have a task, such as sending an email, that I want to ensure runs fault-tolerantly after, say, a user is created. Can I use Spring Retry's RetryContextCache to persist the email-sending task somehow, so that if the application crashes before the email is sent, the retry state is picked up from the cache and retried when the application starts again?

I use email sending as an example, and I assume sending the email is safely idempotent. I would like this to be generic enough to "schedule" any type of task or service call, similar to the outbox pattern used in microservice architectures, but using Spring Retry or perhaps Quartz as a backing store.