windows 10 – Bad blocks on two external hard drives with vital data

I have two Western Digital My Passport hard drives (one 2TB and the other 4TB), and the 2TB drive was corrupted a few days ago when I accidentally pulled it out of a MacBook just before it had been safely ejected. Even though this has happened plenty of times before, this time it caused bad blocks to form in a subfolder I had used recently (but not last). It may also have been caused by a drop a few days earlier, about two feet to the floor, landing partly on carpet and partly on wood. I discovered the bad blocks on a Windows PC after both chkdsk and the Windows disk-check GUI tool kept failing, with chkdsk reporting 766f6c756d652e63 errors. During those chkdsk runs, Event Viewer repeatedly logged bad blocks, and with another command I was able to find the specific folder they were in. While the drive is usable and I can open nearly every file, performance is extremely slow and I cannot delete the bad-block subfolder or any folder containing it.

Now I have an even worse problem. I started manually copying newer files (completely separate from the bad-block directory) to the other hard drive, and after some success, one file would not copy because the transfer speed slowed to a crawl. Now there are several folders of videos that cannot be opened, and I suspect they now have bad blocks as well. I didn’t know bad blocks could spread like that! How could there not be a serious warning about this? I’m currently running chkdsk /f /r on the 4TB drive and it keeps looping at 100 percent during “verifying file allocation”. I have all of the data on the 2TB drive backed up to my desktop, with the exception of my videos, which were backed up to the 4TB drive. If I lose both of these drives, my entire collection is kaput. Can I still save most of my files if I go about backing them up properly?
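
In case it clarifies what I mean by backing up manually: below is a sketch of the kind of copy I have in mind for pulling data off the failing drive (the drive letters and folder names are placeholders, not my real paths). The retry options are meant to keep a single unreadable file from stalling the whole run.

rem E:\ is the failing drive, D:\rescue the destination (both placeholders)
rem /E copies subfolders; /R:1 /W:1 retry a failed file only once and wait
rem only one second (instead of robocopy's huge defaults); /LOG records skips
robocopy E:\ D:\rescue /E /R:1 /W:1 /LOG:rescue.log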

networking – Is it bad security to trust 127.0.0.1 (localhost) in ssh connections?

I have a script that forwards a port on my machine over ssh to a remote host, so that I can access a database that is only reachable from that host. The script is as follows:

ssh -o "StrictHostKeyChecking no" -o ExitOnForwardFailure=yes -f -N -L <port>:<database_url>:<database_port> <user>@<remote_host_id> -i  <private_key>;
mysql <db_name>  -u <username>  -h 127.0.0.1 ;

I use this same script with multiple remote hosts/databases, and I always want to use the same local port, so I added the -o "StrictHostKeyChecking no" option, because without it a “IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!” warning appears.

Is this secure? I think it basically amounts to trusting 127.0.0.1, which most likely won’t be spoofed.
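
One alternative I am wondering about, instead of turning checking off entirely: if the warning is caused by different remote hosts colliding in my known_hosts (that is an assumption on my part), each remote could be pinned under its own alias and known_hosts file so StrictHostKeyChecking can stay on. HostKeyAlias and UserKnownHostsFile are standard OpenSSH options; the alias and file name below are placeholders.

# one alias + known_hosts file per remote, so host keys never collide
ssh -o ExitOnForwardFailure=yes \
    -o HostKeyAlias=db-host-1 \
    -o UserKnownHostsFile=~/.ssh/known_hosts.db-host-1 \
    -f -N -L <port>:<database_url>:<database_port> <user>@<remote_host_id> -i <private_key>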

Azure DevOps agents in scale set have bad network connection

I deployed private Azure DevOps agents to an Azure scale set and put it behind a load balancer, so they now talk to the outside world through one static IP. To spin up a new instance I use my custom VM image.

Now everything is almost fine, except that most npm install (or yarn install) runs time out with:

info There appears to be trouble with your network connection. Retrying…

If I use that same image to start a standalone VM and run the same commands manually, everything works fine.

What can I try to improve the performance?
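
The only stopgap I can think of inside the image itself (it would not fix the underlying network problem) is raising the package managers’ own timeout and retry settings; a sketch with arbitrary values:

# yarn: allow up to 10 minutes per request and limit parallel downloads
yarn install --network-timeout 600000 --network-concurrency 4

# npm: more retries with a longer backoff between them
npm config set fetch-retries 5
npm config set fetch-retry-mintimeout 20000
npm config set fetch-retry-maxtimeout 120000
npm install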

pathfinder 1e – Is letting a player use a Large or larger race a bad idea from the game balance point of view?

I’ve played in games with it allowed; it’s not that big a deal.

Being Large is a considerable advantage for warriors, because reach is so potent. If it is available, anyone going in for melee combat is going to be very, very interested in that race. Many other races will simply not be able to realistically contribute as much for many types of warrior.

But that’s not really all that different from how things were to begin with. There are almost no races in Pathfinder with as much to offer as humans for, well, most everything. That includes warrior-ing. A bonus feat is a huge deal for almost everyone; only fighters gain so many bonus feats that the human bonus feat looks lackluster. And there are other rather-strong races to consider, such as strix. Strix are often banned precisely because flight is that good and so many people want it. Humans, of course, are almost-never banned.

So your Large race is going to be joining the ranks of those races that really stand out as being among the best of the best. On some level, all of your fighters are going to be this race, or strix, or they’re going to simply be worse off than they could be. Your non-fighter warrior-types might consider human another option. There may be a few others, but the point is that a lot of races are just going to be worse. They already are, but adding a new option may highlight that fact in uncomfortable ways.

In the end, though, your Large warrior is still, quite simply, not as powerful as a spellcaster, so there are distinct limits on how far one can go in claiming that this would be “overpowered.”

postgresql – Postgres, trigger function, is it bad for performance?

I have added the following trigger function to one of my tables in my Postgres database.

CREATE OR REPLACE FUNCTION trigger_set_timestamp()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
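
For completeness, the function only runs because it is attached to the table with a trigger along these lines (my_table is a placeholder for the real table name; EXECUTE FUNCTION assumes PostgreSQL 11 or later, older versions use EXECUTE PROCEDURE):

-- fires once per updated row, just before the row is written
CREATE TRIGGER set_timestamp
BEFORE UPDATE ON my_table
FOR EACH ROW
EXECUTE FUNCTION trigger_set_timestamp();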

I am new to this, so I am wondering: is this going to slow down my database? Should I avoid trigger functions for performance reasons, and if so, what’s the alternative?

(In this case, this was the only way I knew to make the updated_at column always record the date whenever any of the columns in the table changed.)

postgresql – Is it considered good or bad practice to use default values for parameters in stored procedures/functions?

TL;DR VERSION

Is defining default values for parameters in a PostgreSQL function considered good practice, or is it something that is generally frowned upon?

Related follow-up question: If/when using function default values, is it a good idea to define default values at both the table and the function definition level, or should I just do one or the other?


DETAILED VERSION

I’ve been the “DBA” – and I use that term very loosely – for my company for several years now (not to mention the “Lead Software Developer/Programmer/Engineer”, “Lead Help Desk Technician”, etc.), but I know that I’ve barely scratched the surface of what it means to be a full-fledged DBA. Just this week, I learned something that I probably should’ve known about for a while but had simply never encountered or had any reason to really think about. My revelation came in the form of default parameter values in functions in PostgreSQL.

We’re currently using PostgreSQL v12.1 for our production environment, and I have a significant number of tables, views, and functions with which our in-house software interacts on a regular basis. A majority of my functions were originally defined with named parameters, although they weren’t really being used as such because they were built back before PostgreSQL could “properly” handle named parameters in the body of the function. As such, I have a large number of functions that look something like this:

CREATE FUNCTION "someinsertfunction"(column1value character varying, column2value character varying, 
                                     column3value boolean, column4value date, column5value numeric,
                                     column6value numeric, column7value date, column8value integer,
                                     OUT newid integer) RETURNS integer
    SECURITY DEFINER
    LANGUAGE "plpgsql"
AS
$$
BEGIN
    INSERT INTO public.sometable
    (
        field1,
        field2,
        field3,
        field4,
        field5,
        field6,
        field7,
        field8
    )
    VALUES
    (
        $1,
        $2,
        $3,
        $4,
        $5,
        $6,
        $7,
        $8
    )
    RETURNING sometableid INTO newid;
END
$$;

I’m working on replacing the numeric placeholders ($1, $2, $3, etc.) with their corresponding parameter names if only to make my job a little easier when it comes to making changes to these functions down the road (I’m working on redesigning a lot of the database, so the less “confusion” I have, the better). However, as I was working through a few of these functions, I ran across a design feature that I had previously overlooked – default values for parameters. Instead of the above, I can define the function with these defaults as below to assist myself when it comes to my application programming:

CREATE FUNCTION "someinsertfunction"(column1value character varying DEFAULT NULL::character varying,
                                     column2value character varying DEFAULT NULL::character varying, 
                                     column3value boolean DEFAULT FALSE,
                                     column4value date DEFAULT NULL::date,
                                     column5value numeric DEFAULT 0::numeric,
                                     column6value numeric DEFAULT NULL::numeric,
                                     column7value date DEFAULT NOW()::date,
                                     column8value integer DEFAULT NULL::integer,
                                     OUT newid integer) RETURNS integer

The help comes in when I forget to provide a parameter/value in my code for a field: the function itself will then attempt to insert that default value. My application code – at least, for all of my current/new development projects – is built to assign names to each parameter as they’re being added, so I can pass any, all, or none of the function’s parameters through my object construction before actually calling/executing the function itself.
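
To illustrate (the values below are made up), a call using PostgreSQL’s named-argument notation only has to mention the parameters it actually cares about, and everything else falls back to the declared defaults:

-- only two parameters supplied; the rest take their DEFAULT values
SELECT someinsertfunction(
    column1value => 'some text',
    column5value => 42.50
);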

Another case where this would be helpful is if I’ve changed the table structure (adding a column) and/or the function definition at the database level but simply haven’t had time to push the code changes out to the users (happens all too often). Without defining a default value for the parameters, adding or removing a parameter to/from a function that’s already being called by my application will cause the application to fail with an error that the function does not exist.

Of course, if the table definition prohibits a particular value that’s defined as a default in the function – for example, column_a is defined in the table as NOT NULL, but the default value defined in the function is NULL – the function will still fail (as it should), but at least I can give myself a little break when it comes to distributing certain programming changes that call on those functions.

I’ve considered some possible “dangers” in adding these default values, but the benefits seem to vastly outweigh the detriments, IMO. Is there something else I should be taking into account before I go ALTER FUNCTION crazy?

network – Any way to check if my AirPort card is going bad on my Mac Pro 2013?

Every day for the past couple of days, the Wi-Fi on my Mac Pro 2013 running Mojave 10.14.6 has spontaneously vanished. The signal-strength icon in the menu bar goes gray, and the status is “Wi-Fi looking for networks” (even though it still lists all of my local networks).

The only solution is to reboot the machine.

Besides this random, erratic behavior, is there perhaps something in the logs I could look at that would show some telltale signs that this is, indeed, what’s happening? I would like, perchance, some additional evidence before I drag it in to the Genius Bar with “yeah, it sometimes does this” to try and narrow down the issue.

In the end, they may simply swap out the card. I don’t know if it will fail a diagnostic, or if it’s just happy happy until it decides “No, not happy”.
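
In case it helps anyone point me in the right direction, this is the kind of query I was thinking of running against the unified log; the predicate is a guess at which daemon records the relevant Wi-Fi events, not something I have confirmed:

# show the last few hours of entries from the Wi-Fi daemon around a dropout
log show --last 3h --info --predicate 'processImagePath CONTAINS "airportd"'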

magento2 – Magento 2.3.5 – Multisite – Bad path for image / CSS / JS

For the sake of clarity, I will try to describe my configuration as precisely as possible.

  • Magento 2.3.5-p1
  • NGINX + FPM – PHP 7.3
  • Varnish -> Full page cache
  • Redis -> Session Cache
  • CDN -> CloudFront with Same distribution for each domain
  • Theme -> Porto Child Theme for each domain

Two domain names:

  • domain1.com => Main domain through which I also access the administration
  • domain2.com => Secondary domain created in a second step

These two domains point to the same server (via A records), and Nginx takes care of sending each domain to its website.

In the back office, two websites have been created, each with its own “Store” and “Store Views”.

In store -> settings -> configuration -> General -> Web -> Base URL

AND

In store -> settings -> configuration -> General -> Web -> Base URL (Secure)

The URLs have been filled in correctly.

All redirections work properly.
When I access domain1.com or domain2.com, the right site appears.

Now comes the problem:

On domain1.com => NO PROBLEM.

On domain2.com => the paths for images / CSS / JS do not work properly.

Instead of having:

domain2.com/media/logo/.../my_logo.png

I have:

domain1.com/media/logo/websites/3/my_logo.png

Result: it doesn’t work because of CSP restrictions. I could easily work around the problem by putting domain1.com on my whitelist.

But this is not a suitable solution for me.
Indeed, what I want is:

  • that the right domain is used, namely domain2.com instead of domain1.com;
  • even better, that it goes through the CDN as it does on domain1.com!

On the other hand, it should be noted that some files do go through the CDN correctly.

Not really knowing where to look for this problem, I am turning to you to try to find a solution.

Thank you in advance for all your comments, help, and suggestions that might help me solve this problem.
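
If it helps to see exactly what I would be changing, this is the kind of CLI check and override I have in mind (website2 is a placeholder for my real website code, and I am not certain these are the only config paths involved):

# inspect what Magento currently has for the second website's URLs
bin/magento config:show --scope=websites --scope-code=website2 web/unsecure/base_url
bin/magento config:show --scope=websites --scope-code=website2 web/unsecure/base_media_url

# override the media/static base URLs at website scope, then flush caches
bin/magento config:set --scope=websites --scope-code=website2 web/unsecure/base_media_url "https://domain2.com/media/"
bin/magento config:set --scope=websites --scope-code=website2 web/unsecure/base_static_url "https://domain2.com/static/"
bin/magento cache:flush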

What Are Your Bad Habits While Trading Forex? – General Forex Questions & Help

A forex demo account lets a trader test their trading plan for profitability, drawdowns, and other performance measures. It also lets a trader evaluate the brokerage firm offering the account without committing real funds. Most online brokerages will let you open a demo account with no obligation by providing a minimal amount of personal information. These practice accounts often limit how much virtual funding is provided, and they may also have a time limit after which the demo account expires and requires the trader to create another one or switch to a live account.

network – How bad is rsync with no-password sudo?

I need to back up files with their attributes preserved from a source workstation to a LAN server (both on Linux Mint; the server is running sshd and Samba). One solution that preserves the files’ source attributes is to run rsync over ssh, something like this on the client side:

rsync -a --rsync-path="sudo rsync" -e ssh /media/user1/source user2@server:/media/user2/destination/

However, for this to work as expected, rsync needs to be added to the sudoers list as NOPASSWD on the server side:

user2 ALL=NOPASSWD:/usr/bin/rsync

This setup makes backing up with attribute preservation work fine. But how secure is it to have passwordless sudo rsync on the server? Is it inviting problems, or am I overthinking it? Our main security concern is unauthorised copying of sensitive data by a motivated attacker; clearly, if you can sudo rsync, you can send any file from the server to an arbitrary internet location.
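
For what it’s worth, the only mitigation I have thought of so far is locking down the backup key itself on the server, independently of the sudoers rule; a sketch where the source address and the key material are placeholders:

# ~user2/.ssh/authorized_keys on the server: restrict where the backup key may
# be used from and what the session may do (the sudoers rule stays as above)
from="192.168.1.10",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... backup@workstation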

What are your thoughts? If it’s that bad, do you have any suggestions for a LAN backup that would preserve the attributes from the source on the LAN workstation?