external scripts – How do I log in a user?

I want to log in Drupal 8 users from an external PHP file. I am trying the following code.

use Drupal\Core\DrupalKernel;
use Symfony\Component\HttpFoundation\Request;
use Drupal\user\UserInterface;
//$autoloader = include('/vendor/autoload.php');
$autoloader = require_once __DIR__ . '/vendor/autoload.php'; 
$kernel = new DrupalKernel('prod', $autoloader);

$request = Request::createFromGlobals();
$response = $kernel->handle($request);

// ID of the user.
// REPLACE WITH WHATEVER ID YOU WANT TO LOGIN AS;
$uid = 100; 
$user = \Drupal\user\Entity\User::load($uid);

// This is required to call user_login_finalize here.
$kernel->prepareLegacyRequest($request);
user_login_finalize($user);

$response->send();

$kernel->terminate($request, $response);

When I run it, I get the following error.

TypeError: Argument 1 passed to user_login_finalize() must implement interface Drupal\user\UserInterface, null given, called in /var/www/html/stocksee/public_html/component/login.php on line 22 in user_login_finalize() (line 554 of core/modules/user/user.module).

Please help me.

linux – Extremely high incoming traffic on web server but no abnormalities in log files

Today we recorded extremely high incoming traffic (1 Gbps) on our Debian web server (green chart). On an average day it peaks at about 20-30 Mbps. The firewall as well as fail2ban are configured correctly and should be working fine.
(Blue chart: outgoing traffic; green chart: incoming traffic.)
We checked our log files and compared them to those of past days, and we could not find any abnormalities. The high incoming traffic drives CPU usage to 100 percent and our web application stops working.

What could be the reasons for such high incoming traffic? If it was a DDoS attack, why is there no suspicious traffic and are there no suspicious IPs in the log files?

Log contents of text file before deleting

The script isn't logging the contents of run.txt to log.txt.

I've tried removing the delete command, to see whether the file was being deleted too quickly to be logged, but that wasn't the case.
What needs changing?

@ECHO OFF &setlocal

SET File=run.txt

type %File%
for /f "tokens=*" %%A in (%File%) do (echo >> log.txt)
del "%File%" /s /f /q > nul

pause

Generate Change/Audit log for a specific SharePoint list

Three options:

Turn on Version History in the list settings. Then you can open each list item's version history and see who changed what and when. This data is not easy to access for analysis or reporting, though, unless you are prepared to write code and access the REST API (see the sketch after these options).

You can also create a Power Automate flow that runs whenever an item is modified in that list. You can use the flow to create new items in a log list, but the flow will not be able to determine which column(s) of the item were changed and which were not.

You can use a hack to export the version history to Excel, but it is clunky. See https://stackoverflow.com/questions/10561661/sharepoint-list-version-history-export-to-excel
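To illustrate the REST API route from option 1, here is a rough Python sketch of reading one item's version history. The site URL, list title, item ID and access token are placeholders (assumptions, not values from the question), and obtaining the token (for example via an Azure AD app registration) is out of scope; treat this as a starting point rather than a tested recipe.

# Sketch: fetch the version history of a single list item via the SharePoint REST API.
# SITE, LIST_TITLE, ITEM_ID and TOKEN are placeholders you must supply yourself.
import requests

SITE = "https://contoso.sharepoint.com/sites/Team"   # hypothetical site URL
LIST_TITLE = "Projects"                               # hypothetical list title
ITEM_ID = 1                                           # item whose history we want
TOKEN = "<access token>"                              # acquired separately

url = f"{SITE}/_api/web/lists/GetByTitle('{LIST_TITLE}')/items({ITEM_ID})/versions"
headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/json;odata=verbose",
}

resp = requests.get(url, headers=headers)
resp.raise_for_status()

# With odata=verbose the versions sit under d/results; each entry records the
# version label, the modification time and who made the change (exact field
# names vary slightly by environment, so inspect the payload before relying on them).
for version in resp.json()["d"]["results"]:
    print(version.get("VersionLabel"), version.get("Modified"))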

My CentOS 7 minimal font screwed up when trying to tail the ModSecurity audit log

My font display was completely normal, but right after I cat or tail /var/log/modsec_audit.log, the terminal font looks like the screenshot below. Any solution?

Image link:
https://i.stack.imgur.com/XkIyj.png

syslog – rsyslog unexpected end of file on log files with OMFileZipLevel

OS: CentOS 7
rsyslog: 8.24.0

I have various hosts sending logs to my centralised rsyslog server. I use the OMFileZipLevel option in my config file to compress the logs, and then zcat them any time I wish to view the contents.
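For context, here is a minimal sketch of the kind of legacy-style configuration assumed here; the compression level, path template and selector are placeholders rather than the question's actual config.

# Compress files written by omfile (gzip level is a placeholder).
$OMFileZipLevel 6
# Write each remote host's messages to its own compressed file (placeholder path).
$template RemoteFile,"/var/log/remote/%HOSTNAME%.log.gz"
*.* ?RemoteFile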

Since upgrading to rsyslog 8, whenever I try to zcat one of the compressed logs I get the following error:

#   zcat srv1.example.com.log.gz
2021-01-06T04:46:11-08:00 srv1.example.com lab: test_msg

gzip: srv1.example.com.log.gz: unexpected end of file

If I stop the rsyslog server and access the file, I don't get that error message.
Even after I start the server again I can still read the log file without the EOF message; however, as soon as my rsyslog server receives a message and writes it to the file, I start getting the same error:

#   zcat srv1.example.com.log.gz
2021-01-06T05:54:22-08:00 srv1.example.com lab: test_msg

gzip: srv1.example.com.log.gz: unexpected end of file


#   systemctl stop rsyslog

#   zcat  srv1.example.com.log.gz
2021-01-06T05:54:22-08:00 srv1.example.com lab: test_msg


#   systemctl start rsyslog

#   zcat srv1.example.com.log.gz
2021-01-06T05:54:22-08:00 srv1.example.com lab: test_msg


srv1:~$ logger -p local5.info test_msg2 @my_rsyslog_server

#   zcat srv1.example.com.log.gz
2021-01-06T03:32:09-08:00 srv1.example.com lab: hab_test
2021-01-06T05:55:27-08:00 srv1.example.com lab: test_msg2

gzip: srv1.example.com.log.gz: unexpected end of file

I was able to find a mailing list thread where someone mentions a similar issue, and it has to do with the file still being held open by rsyslog.

The thing is, I have another rsyslog server running version 5.8.10 (CentOS 6) with the exact same rsyslog configuration file, but I don't see this EOF behaviour on its compressed logs.

Could this be a bug in rsyslog 8.24.0?

How to see my location log in Google Maps without a working network connection?

I would like to see my Google Maps position log even for times when I was offline or did not use my phone.

My impression is that position logging works only if the phone is active or I am on Wi-Fi.

I would like my position to be logged more often. Ideally, some background task would continuously (or maybe every 10 minutes or so) log my position data, save it, and upload it to Google Maps whenever I am online.

Is this possible somehow? It is not a problem if it requires some other map application, although Google Maps would be ideal.

This answer works only if I manually turn it on every time. I do not want to do that; I need an automatic solution.

plotting – Plot[Zeta[x], {x, 2, 20}, ScalingFunctions -> "Log"] is not logarithmic

The code

Plot[Zeta[x], {x, 2, 20}, ScalingFunctions -> "Log"]

produces the following image, which is a plot of the Zeta function but the y-axis is not logarithmic.

Plot of Zeta-function

Replacing Zeta by a different function produces a logarithmic scaling on the y-axis.

The scaling functions Log10 and Log2 also fail.
LogPlot also fails.

I tested the code in 12.2 and 11.3; both fail.

Is this a bug or am I missing something?
I'm hesitant to call it a bug (and report it to Wolfram), as it seems like something that would have been encountered before.

audit – MySQL Router making the MySQL general log not useful

I am running MySQL 5.7 on a 3-node cluster using InnoDB Cluster, and it is making use of the general_log quite challenging. The mysqlrouter check-ins are being logged in the MySQL general_log: these status updates appear in the log nearly every second for every router that is pointed at the cluster.

2808634 Query select I.mysql_server_uuid, I.endpoint, I.xendpoint, I.attributes from mysql_innodb_cluster_metadata.v2_instances I join mysql_innodb_cluster_metadata.v2_gr_clusters C on I.cluster_id = C.cluster_id where C.cluster_name = 'cluster' AND C.group_name = '7b6cc6c6-b4d6-11ea-bb3a-023d3853a38a'
2808634 Query COMMIT
2808634 Query show status like 'group_replication_primary_member'
2808634 Query SELECT member_id, member_host, member_port, member_state, @@group_replication_single_primary_mode FROM performance_schema.replication_group_members WHERE channel_name = 'group_replication_applier'
    8 Query BEGIN
    8 Query COMMIT /* implicit, from Xid_log_event */
2808633 Quit  

I'm guessing this is how mysqlrouter internally knows where to route connections. Is there a way to stop writing this to the general_log, other than turning off the general_log entirely?

I use the general_log to audit who logs in and what they run. I have found there are tools to do this other than the general_log, such as:

MySQL Enterprise Audit https://dev.mysql.com/doc/refman/5.7/en/audit-log.html

Percona Audit Log https://www.percona.com/doc/percona-server/5.7/management/audit_log_plugin.html

I may need to investigate these further.

Using LDAP: how to log in with SSH, mounting the Samba home directory with autofs?

I have spent some time setting up LDAP-based authentication in my macOS, iOS and Linux network, taking into account the special quirks of macOS and Synology (my NAS). SSH login (SSH keys etc.) works and Samba share mounts work. It was all quite fiddly, and I now know more about LDAP than I ever anticipated.

However…

Having reached a point where I could (at least in theory) log into any machine in my network, I thought it would be nice for users to also have access to the same home directory everywhere. No problem: autofs, which can also be managed from LDAP! Or so I thought…

I’m trying something like the following to set up Samba home directories for autofs:

dn: cn=&,ou=auto.home,cn=automount,cn=etc,dc=home,dc=arpa
cn: &
objectClass: automount
objectClass: top
automountInformation: -fstype=cifs,vers=3.0,domain=HOME,rw,username=&,uid=&,gid=& ://s-sy-00.local/home

Some background:

  1. s-sy-00.local is my Synology NAS, where the home directories will live.
  2. /home is the UNC path of the home directory share that Synology serves up for the user defined in username=.

The problems start when I log in to a remote machine with SSH: autofs tries to mount the user's home directory, but it needs the user's password. I can put the password into a password= parameter on the automountInformation line, or I can put the username and password into a credentials file that I pass with the credentials= parameter. Both approaches add complexity (an automount entry for each user) and duplication (the same username and password in two different places: LDAP and the credentials file, or the automount entry and the posixUser in LDAP).
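For reference, the credentials= approach mentioned above points mount.cifs at a small file roughly like the one below; the path and values are placeholders, and the file would need to be readable by root only (chmod 600).

# /etc/creds/alice.cred  (hypothetical per-user credentials file)
username=alice
password=secret
domain=HOME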

Is there a standard way of dealing with this problem? My search engine skills have not turned anything up yet.

It seems to me that there are three possible solutions:

  1. the one that is obvious to everyone else but not to me;
  2. using the SSH key to mount a credentials file per user (possibly dynamically generated from LDAP) from an SSHFS share;
  3. using Kerberos for a full-blown SSO.

I would prefer number 1 🙂 I have an aversion to Kerberos: it seems to be overkill and is certainly relatively complex.

Can anyone offer some words of wisdom to give me a flying start into the new year?