continuous integration – How to address a common problem with automated testing on a remote machine

CI/CD options such as Bitbucket Pipelines and GitHub Actions make use of virtual machines called runners. Changes in source code trigger the runners, which in turn run a set of commands. The commands may include instructions to run a test suite, which checks whether the source code changes break existing functionality.

Contributors often develop tests on their local machines, and tests that succeed on one platform do not necessarily succeed on another. This means that tests running on a remote machine can (and often do) behave differently from how they behave on the local machine. For example, a test that relies on Windows-style line endings may fail if it runs under a Linux operating system.

There are a few ways to circumvent this problem:

  • Make the test assertions platform-independent. In the above example, this might mean replacing every newline character with an empty string before comparing.
  • Replicate the runner conditions on the local machine using containerization (see the sketch after this list).
  • Accept failing tests on the local machine.
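For the containerization option, the closest I have come to a concrete recipe is running the suite locally inside an image that approximates the runner's OS, e.g.:

# A rough sketch of option 2, assuming a Python project whose dependencies
# (including pytest) are listed in requirements.txt and whose remote runner uses a
# Debian/Ubuntu-based image — both are assumptions, not anything the CI tool mandates.
docker run --rm -v "$PWD":/src -w /src python:3.11-slim \
    sh -c "pip install -r requirements.txt && pytest"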

Only the second option seems viable; however, I have seen little to no support for it in the documentation of most CI/CD tools. This leads me to believe that I might be missing something. For the professional software developers/testers out there: has anyone run into this problem? What is the correct way to address it?

Remote access server that’s running through VPN

I have a server running Ubuntu 20.04.2 LTS that I access through SSH, and that is working fine. I'm trying to have the server run through a VPN so I can change which location it reports when logging into different websites. I've used option B at this site: https://protonvpn.com/support/linux-vpn-setup/ and that is working as well. The problem I have is that when I activate the VPN, the server's IP obviously changes, so I can't log in to the server via SSH anymore. How can I solve this problem?
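From what I have gathered, one approach is policy routing, so that replies to inbound SSH leave via the physical interface instead of the tunnel. A minimal sketch, assuming eth0 is the physical interface, 192.168.1.10 is the server's address on that interface and 192.168.1.1 is its gateway (all placeholder values):

# Replies from the server's own address keep using the physical interface,
# so SSH sessions to that address bypass the VPN tunnel.
ip rule add from 192.168.1.10 table 128
ip route add table 128 default via 192.168.1.1 dev eth0

I have not confirmed whether this survives the ProtonVPN client rewriting the routing table.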

windows server 2012 r2 – Restore-DfsrPreservedFiles on a remote machine

I'm trying to restore pre-existing files on a DFS share on Server 2012 R2. No matter what I do, I get the response below. This occurs even if I use -RestoreToPath or if I try to run it with the local admin account.

PS E:\> Restore-DfsrPreservedFiles -Path "E:\share\DfsrPrivate\PreExistingManifest.xml" -RestoreToOrigin -Force

Restore-DfsrPreservedFiles : Access to path E:\share\DfsrPrivate\PreExisting\username\Data aplikací\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar was denied.

The issue seems to be that the administrator doesn't have permission to access some of the AppData content. However, the DFSR service moved it into the preserved files without an issue, so there has got to be some way of doing this.

I need Restore-DfsrPreservedFiles to work in a sudo rsync -a kind of way.

Or at least make it move anything it can access while ignoring errors.
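The closest I have come to a workaround is taking ownership of the preserved files as Administrators and granting full control before retrying the restore. An untested sketch, using the same share path as above:

# Take ownership recursively as the Administrators group, grant full control,
# then retry the restore of the preserved files.
takeown /F "E:\share\DfsrPrivate\PreExisting" /R /A /D Y
icacls "E:\share\DfsrPrivate\PreExisting" /grant "BUILTIN\Administrators:(OI)(CI)F" /T /C
Restore-DfsrPreservedFiles -Path "E:\share\DfsrPrivate\PreExistingManifest.xml" -RestoreToOrigin -Force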

Can I access a remote SQL Server via VPN without a firewall?

I'm totally new to SQL Server, but I'm going to work hard on it during this period. There are two remote SQL Servers: Server A and Server B. I want to access data on Server B from Server A. Server B does not have a firewall; it sits behind a simple consumer router/modem. Can I access Server B via VPN without a firewall?
Normally, in order to connect to a remote SQL Server, the default port 1433 has to be reachable over TCP/UDP. But without a firewall, is the connection still possible? Any tutorial on this topic would be appreciated!
If my question is not formulated correctly, please let me know.
I appreciate your time and help.
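For instance, would it be enough to verify from Server A that Server B's SQL port is reachable once the VPN is up? A rough sketch (the host name is a placeholder):

# Check TCP reachability of the default SQL Server port across the VPN link.
Test-NetConnection -ComputerName serverB.example.local -Port 1433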

development – The remote server returned an error (401) unauthorized

I have installed a fresh version of SharePoint 2019 on domain A, then I added a second computer to domain A, which will be hosting websites on IIS.

On the second computer I have created a console application, just for testing the connection to SharePoint, using the code below, and I am getting an unauthorized-access error. Is there anything wrong with the code below? Note that the credentials are correct.

        // Domain credentials (domain embedded in the user name; values are placeholders).
        System.Net.NetworkCredential cred = new System.Net.NetworkCredential(@"DomainName\administrator", "Password");

        using (ClientContext clientContext = new ClientContext("http://10.1.1.1/sharepoint2019/sites/test/"))
        {
            clientContext.AuthenticationMode = ClientAuthenticationMode.Default;
            clientContext.Credentials = cred;

            // Build a keyword query against SharePoint search.
            KeywordQuery keywordQuery = new KeywordQuery(clientContext);
            keywordQuery.QueryText = "SharePoint";
            keywordQuery.EnablePhonetic = true;
            keywordQuery.EnableOrderingHitHighlightedProperty = true;
            //keywordQuery.SummaryLength = 500;

            // Queue the search, then send the request; 'results' is only
            // populated after ExecuteQuery() returns.
            SearchExecutor searchExecutor = new SearchExecutor(clientContext);
            ClientResult<ResultTableCollection> results = searchExecutor.ExecuteQuery(keywordQuery);

            clientContext.ExecuteQuery();
        }
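For what it's worth, the same credentials can also be passed with the domain as a separate constructor argument (values here are the same placeholders as above):

            // Equivalent credentials: user name, password and domain given separately.
            var cred = new System.Net.NetworkCredential("administrator", "Password", "DomainName");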

linux networking – Exclude remote syslog client logs from /var/log/syslog on host machine

Background:

I have a little Raspberry Pi server running the latest Raspbian OS, which runs a number of network appliances to help manage a complex IoT LAN for a client.

I have been using rsyslog to write logs from the network hardware and servers to an external drive, mounted to /media/syslog. This is working fine. No, I can’t write them to /var/log, because I’m generating hundreds of megabytes of logfiles per day, and I need to archive them uncompressed. Again, this is working flawlessly.

The Problem:

Every single event that is written to the logs in /media/syslog is also written to /var/log/syslog.

I really cannot overstate how incredibly annoying this is, especially since the volume of logs from the client devices is so enormous that even extremely generous logrotate settings mean I’ve got about 24 hours of syslog history on the server, maximum. By the time a problem is noticed and reported to me (within a day or two, usually), the logs have fully rotated out.

How do I prevent those remote clients’ logs from ending up in /var/log/syslog?

I’ve seen a bunch of posts saying I need to do something with *.*;auth,authpriv.none -/var/log/syslog but I haven’t the foggiest idea how to mess with syslog facilities or what it would even look like for this particular situation, so if you’re about to tell me to do something along those lines, I’m gonna need you to explain in excruciating detail exactly what to cut and paste where.

Attached are my settings for rsyslog.conf…

# /etc/rsyslog.conf configuration file for rsyslog
#
# For more information install rsyslog-doc and see
# /usr/share/doc/rsyslog-doc/html/configuration/index.html


#################
#### MODULES ####
#################

module(load="imuxsock") # provides support for local system logging
module(load="imklog")   # provides kernel logging support
#module(load="immark")  # provides --MARK-- message capability

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="4565")

# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="4565")


###########################
#### GLOBAL DIRECTIVES ####
###########################

#
# Use traditional timestamp format.
# To enable high precision timestamps, comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

#
# Set the default permissions for all log files.
#
$FileOwner root
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022

#
# Where to place spool and state files
#
$WorkDirectory /var/spool/rsyslog

#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf


###############
#### RULES ####
###############

#
# First some standard log files.  Log by facility.
#
auth,authpriv.*         /var/log/auth.log
*.*;auth,authpriv.none      -/var/log/syslog
#cron.*             /var/log/cron.log
daemon.*            -/var/log/daemon.log
kern.*              -/var/log/kern.log
lpr.*               -/var/log/lpr.log
mail.*              -/var/log/mail.log
user.*              -/var/log/user.log

#
# Logging for the mail system.  Split it up so that
# it is easy to write scripts to parse these files.
#
mail.info           -/var/log/mail.info
mail.warn           -/var/log/mail.warn
mail.err            /var/log/mail.err

#
# Some "catch-all" log files.
#
*.=debug;\
    auth,authpriv.none;\
    news.none;mail.none -/var/log/debug
*.=info;*.=notice;*.=warn;\
    auth,authpriv.none;\
    cron,daemon.none;\
    mail,news.none      -/var/log/messages

#
# Emergencies are sent to everybody logged in.
#
*.emerg             :omusrmsg:*

…and rsyslog.d/00-remotes.conf

$template NetworkLog1, "/media/syslog/%FROMHOST-IP%.log"
:fromhost-ip, isequal, "192.168.2.1" -?NetworkLog1
:fromhost-ip, isequal, "192.168.2.124" -?NetworkLog1
:fromhost-ip, isequal, "192.168.2.160" -?NetworkLog1
& stop
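Based on those posts, is this roughly what the fix would look like? As far as I can tell, the single & stop above only applies to the filter directly before it (the 192.168.2.160 line), so messages from the other two hosts still fall through to the *.* rule in rsyslog.conf. An untested sketch of 00-remotes.conf with the discard repeated after every remote-host rule:

$template NetworkLog1, "/media/syslog/%FROMHOST-IP%.log"
# Write each remote host to its own file, then stop, so the message never
# reaches the default rules (and therefore never lands in /var/log/syslog).
:fromhost-ip, isequal, "192.168.2.1" -?NetworkLog1
& stop
:fromhost-ip, isequal, "192.168.2.124" -?NetworkLog1
& stop
:fromhost-ip, isequal, "192.168.2.160" -?NetworkLog1
& stop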

Can I make all traffic from a remote router flow through my network?

My sister is moving out soon, but I want to make sure she will always be able to connect to the local NAS and RDP hosts.
I don't want her to have to connect via a VPN client all the time.
Is there any way to do this (or is it really bad practice to do so)?
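One pattern I have been considering is a router-to-router (site-to-site) tunnel, so her router handles the VPN and none of her devices need a client. A rough WireGuard sketch, assuming both routers can run WireGuard, my LAN is 192.168.1.0/24, hers will be 192.168.2.0/24, and 10.9.0.0/24 is the tunnel network (all placeholder values; keys omitted):

# My router: /etc/wireguard/wg0.conf
[Interface]
Address = 10.9.0.1/24
ListenPort = 51820
PrivateKey = <my-private-key>

[Peer]
PublicKey = <her-public-key>
AllowedIPs = 10.9.0.2/32, 192.168.2.0/24

# Her router: /etc/wireguard/wg0.conf
[Interface]
Address = 10.9.0.2/24
PrivateKey = <her-private-key>

[Peer]
PublicKey = <my-public-key>
Endpoint = my-home.example.net:51820
# 192.168.1.0/24 routes only NAS/RDP traffic through the tunnel;
# 0.0.0.0/0 here would instead push all of her traffic through my network.
AllowedIPs = 10.9.0.1/32, 192.168.1.0/24
PersistentKeepalive = 25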

automount – NFS mount works, but autofs fails with remote IO error

Here is the problem:

showmount -e tsm2.montanavendor.com

Export list for tsm2.montanavendor.com:
/export/nim/lpp_source/720303_1913 devsap01_bkup.montanavendor.com
/export/nim/spot/720303_1913/720303_1913_spot/usr devsap01_bkup.montanavendor.com
/export/nim/scripts/devsap01_bkup.script devsap01_bkup.montanavendor.com
/nim 172.23.4.18,172.23.4.27,172.23.4.79,192.168.0.202
/export/nim/spot/720400_1937/720400_1937_spot/usr prdsap01_bkup.montanavendor.com
/export/nim/lpp_source/720400_1937 prdsap01_bkup.montanavendor.com
/adminBkup 172.23.4.18,172.23.4.27,unificntl_bkup.montanavendor.com
/tmp1 172.23.4.27,172.23.4.18,unificntl_bkup.montanavendor.com


sudo mount -t nfs -o nfsvers=3 tsm2.montanavendor.com:/tmp1 /nfs/tmp1

works and df yields:

tsm2.montanavendor.com:/tmp1 4096 257 3840 7% /nfs/tmp1


After umount of /nfs/tmp1 I try the automounter with this:

$ cat /etc/auto.master

/nfs /etc/auto.nfs

/mnt /etc/auto.mnt

and

$ cat /etc/auto.nfs

adminBkup -rw tsm2.montanavendor.com:/adminBkup

tmp1 -rw tsm2.montanavendor.com:/tmp1

then

sudo automount -f -v

Starting automounter version 5.1.6, master map /etc/auto.master

using kernel protocol version 5.05

mounted indirect on /nfs with timeout 300, freq 75 seconds

mounted indirect on /mnt with timeout 300, freq 75 seconds

which yields this after an access attempt:

attempting to mount entry /nfs/tmp1

mount.nfs: Remote I/O error

mount(nfs): nfs: mount failure tsm2.montanavendor.com:/tmp1 on /nfs/tmp1

failed to mount /nfs/tmp1


this from another session:

ls -l /nfs/tmp1

ls: cannot access ‘/nfs/tmp1’: No such file or directory


This looks like an ownership or permissions problem. I’ve set 777 on the server directory and re-exported. I’ve tried deleting the /nfs/tmp1 directory before starting automount; no joy.

I think I am missing something obvious, but can’t see it.

The NFS server is AIX v7.2 and the client is Ubuntu Server 20.10.

I can make this a static mount, but I prefer the automounter, which is immune to server reboots.
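One thing I am unsure about: the working manual mount pins -o nfsvers=3, but the map entries above don't, so the automounter may be negotiating a version the AIX export refuses. Would an /etc/auto.nfs like this (untested) make a difference?

# Pin NFSv3 in the map options, matching the manual mount that works.
adminBkup   -rw,nfsvers=3   tsm2.montanavendor.com:/adminBkup
tmp1        -rw,nfsvers=3   tsm2.montanavendor.com:/tmp1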

cognito – Pros vs Cons of Secure Remote Password

We are setting up an authentication system using Cognito and Amplify. We noticed that Amplify suggests Secure Remote Password as the default.

I can understand the benefits of SRP for protecting against man-in-the-middle and such attacks. But it seems there is a downside too: for example, the server is unable to perform strength checks or to call Have I Been Pwned to check if the password has been compromised. By choosing SRP, it seems like we are opening ourselves to more of our users choosing “Password123!” as their password.

I haven’t been able to find much discussion on whether SRP is really a good choice or not. Does anyone know of standards or best practices I can refer to?
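For the Have I Been Pwned point specifically: as I understand it, with SRP any such check would have to run client-side before sign-up, e.g. against the k-anonymity range API, roughly like this (the example password is the one from above; this is only a sketch, not anything Amplify provides):

# Hash the candidate password, send only the first five hex characters to HIBP,
# and count how many returned suffixes match the remainder of the hash.
HASH=$(printf '%s' 'Password123!' | sha1sum | awk '{print toupper($1)}')
PREFIX=${HASH:0:5}
SUFFIX=${HASH:5}
curl -s "https://api.pwnedpasswords.com/range/$PREFIX" | grep -c "$SUFFIX"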

Sporadic IPSec L2TP remote access VPN failures for some users

I have set up an IPSec L2TP remote access VPN server on an Ubiquiti edge router. Clients connect using the native MacOS client.

Some users have problems. Their VPN occasionally disconnects. When they try to reconnect, the connection often fails over multiple attempts. Some time later, they are able to connect again.

Other users have no problems. Their sessions only disconnect when there is no activity (idle timeout).

The users that have failures tend to be using one specific Internet service provider, but it is not 100% consistent.

The connection failures look as if both sides think that the connection was lost. Thus I am thinking that it is a network issue somewhere between the client and server.

Is there any way to make IPSec L2TP more reliable / network error tolerant?

I am basically limited to config via the Ubiquiti CLI and the MacOS VPN client. But if all else fails, I could edit the underlying config files.

The router is configured based on this tutorial from Ubiquiti. The MacOS client config is based on this tutorial.