mount – Pendrive is not copying anything

I am using Ubuntu 18.04 LTS. Recently there was a problem with my pendrive (it was showing very little free space even though it contained no files), so I formatted it using GParted. Now I have found that the mount point has changed, and I also can’t copy anything to the pendrive.
The contents of my /etc/fstab file:

UUID=a15e2f3d-2f50-41c6-8e07-7bf861329f99 / ext4 errors=remount-ro 0 1

UUID=68747df0-f007-40b7-a5f1-89cb232c6337 /home ext4 defaults 0 2

UUID=995626c2-17db-41c3-a3c5-95aa6a732a4e none swap sw 0 0
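
For reference, commands along these lines show how the pendrive actually comes up after the format (the /media path below is an example of where Ubuntu auto-mounts removable drives, not exact output from my system):

lsblk -f                     # filesystem type, UUID and mount point of every block device
mount | grep /media          # whether the pendrive is mounted read-only or read-write
ls -ld /media/$USER/*        # who owns the mounted filesystem (a regular user cannot write if it is owned by root)
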
Please help me out.

sharepoint online – copying files from library to another library using SPFx


memory card – Very low speed of copying files from camera to ssd

Fujifilm X-T30 and SanDisk Extreme Pro 128GB SDXC card (170MB/s, V30, UHS-I, U3).

The average copy speed is 15.6 megabytes per second.

I am using a USB 3.1 Gen 1 cable plugged into a USB 3 port to transfer the photos. 15.6 MB/s is nowhere near either the advertised speed of this SD card or USB 3 speed. What am I doing wrong? Maybe an SD card reader would be faster, but are these speeds normal?
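
For anyone wanting to reproduce the measurement, a single large file can be timed with something like this (the file path is just a placeholder for wherever the camera shows up):

dd if=/path/to/one/photo.RAF of=/dev/null bs=1M status=progress    # read throughput of one file from the camera
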

lubuntu – How can I analyze why rsync copies very slowly after copying a few files very fast on NTFS drives?

I’m backing up around 2 TB of data from one NTFS disk to another NTFS disk with rsync (I also tried Midnight Commander). The copy starts at a “good” 25 MB/s, but after copying a couple of gigabytes the speed drops to around 5 MB/s, sometimes even less.

If I stop the copy and restart rsync to continue it, the speed starts again at around 25 MB/s and then drops back to 5 MB/s.

This is the start of the rsync output; from this point on everything copies that slowly.

>f+++++++++ _University Courses/Harvard/Harvard - Advanced Algorithms 2016/11-Advanced Algorithms (COMPSCI 224), Lecture 11.mp4
    502,527,183 100%   25.29MB/s    0:00:18 (xfr#2, ir-chk=1021/53136)
>f+++++++++ _University Courses/Harvard/Harvard - Advanced Algorithms 2016/12-Advanced Algorithms (COMPSCI 224), Lecture 12.mp4
    494,046,164 100%   25.45MB/s    0:00:18 (xfr#3, ir-chk=1020/53136)
>f+++++++++ _University Courses/Harvard/Harvard - Advanced Algorithms 2016/13-Advanced Algorithms (COMPSCI 224), Lecture 13.mp4
    389,502,911 100%   25.77MB/s    0:00:14 (xfr#4, ir-chk=1019/53136)
>f+++++++++ _University Courses/Harvard/Harvard - Advanced Algorithms 2016/14-Advanced Algorithms (COMPSCI 224), Lecture 15.mp4
    401,384,534 100%   14.92MB/s    0:00:25 (xfr#5, ir-chk=1018/53136)
>f+++++++++ _University Courses/Harvard/Harvard - Advanced Algorithms 2016/15-Advanced Algorithms (COMPSCI 224), Lecture 16.mp4
    498,564,894 100%    4.94MB/s    0:01:36 (xfr#6, ir-chk=1017/53136)
>f+++++++++ _University Courses/Harvard/Harvard - Advanced Algorithms 2016/16-Advanced Algorithms (COMPSCI 224), Lecture 17.mp4
    417,205,204 100%    2.30MB/s    0:02:52 (xfr#7, ir-chk=1016/53136)
>f+++++++++ _University Courses/Harvard/Harvard - Advanced Algorithms 2016/17-Advanced Algorithms (COMPSCI 224), Lecture 18.mp4
    495,885,960 100%    6.16MB/s    0:01:16 (xfr#8, ir-chk=1015/53136)
>f+++++++++ _University Courses/Harvard/Harvard - Advanced Algorithms 2016/18-Advanced Algorithms (COMPSCI 224), Lecture 19.mp4
    475,335,986 100%    2.75MB/s    0:02:45 (xfr#9, ir-chk=1014/53136)
>f+++++++++ _University Courses/Harvard/Harvard - Advanced Algorithms 2016/19-Advanced Algorithms (COMPSCI 224), Lecture 20.mp4
    485,359,371 100%    1.40MB/s    0:05:29 (xfr#10, ir-chk=1013/53136)
>f+++++++++ _University Courses/Harvard/Harvard - Advanced Algorithms 2016/20-Advanced Algorithms (COMPSCI 224), Lecture 21.mp4
    505,021,448 100%    6.46MB/s    0:01:14 (xfr#11, ir-chk=1012/53136)
>f+++++++++ _University Courses/Harvard/Harvard - Advanced Algorithms 2016/21-Advanced Algorithms (COMPSCI 224), Lecture 22.mp4

The drives are not noticeably fragmented.

I tried mounting the drives with the big_writes and async options, but I saw no difference.
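
For reference, the mount for those tests looked roughly like this (the device and mount point are examples; big_writes is an ntfs-3g option):

sudo mount -t ntfs-3g -o big_writes,async /dev/sde1 /mnt/backup
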

I tested the read speed with hdparm, both cached and uncached, and it is well above those 25 MB/s.
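
The hdparm test was along these lines (-T measures cached reads, -t buffered reads bypassing the cache):

sudo hdparm -tT /dev/sde
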

I monitored the copy with iostat, and the %wrqm value (write requests merged) is very high, but I have no idea whether that is bad or OK.

I’m trying to find the bottleneck, but I haven’t been able to locate it. Any help on how I could monitor and analyze the problem would be appreciated.
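
In case it matters, this is roughly what the monitoring looks like while the copy runs (the 2-second interval is arbitrary):

iostat -xm 2 /dev/sde                                  # extended per-device stats in MB/s, including %wrqm
watch -n 2 'grep -E "Dirty|Writeback" /proc/meminfo'   # is the page cache filling up with dirty data?
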

Update:
I tried disabling the disk write cache to check whether the speed drop happens once the drive’s cache fills up:

sudo hdparm -W0 /dev/sde

But the problem persists and the drive behaves the same.

bitcoin-qt “backup wallet” vs copying wallet.dat

I am new to Bitcoin. I wonder what the differences and pros/cons (if any) are between the file generated by bitcoin-qt when clicking on “backup wallet” and a plain copy of the “wallet.dat” file. I can see with “cmp” that these files are indeed different, so I am assuming there are differences between them. Can either of these files recover all of the information in the other?

EDIT: After a few backups I noticed that if I close the wallet and then wait a few minutes, a backup taken at that moment is identical to the wallet.dat file.

google sheets – Pasting values only instead of copying values using a script

I have a piece of code put together from various posts; it is used to move multiple rows into the “Billed” sheet. Unfortunately it’s missing a key feature: I need the script to paste values only when the rows are moved to the other sheet (currently it also brings over any active formulas from the source cells).

I tried using “contentsOnly: true”, but that seems to be completely ignored. Reading through the documentation has not yielded any other leads.

Here is my code so far. Please let me know how I could change it to make it work as intended.

Any help would be appreciated!

function doneCopy() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getSheetByName("BT");
  // Read column A of the source sheet to find rows marked "Billed"
  var values = sheet.getRange(1, 1, sheet.getLastRow(), 1).getValues();
  var moveRows = values.reduce(function(ar, e, i) {
    if (e[0] == "Billed") ar.push(i + 1);
    return ar;
  }, []);
  var targetSheet = ss.getSheetByName("Billed");
  // moveTo() transfers the cells as-is, including formulas and formatting
  moveRows.forEach(function(e) {
    sheet.getRange(e, 1, 1, sheet.getLastColumn()).moveTo(targetSheet.getRange(targetSheet.getLastRow() + 1, 1));
  });
  // Delete the moved rows from the bottom up so the remaining row indices stay valid
  moveRows.reverse().forEach(function(e) {sheet.deleteRow(e)});
}
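
A rough, untested sketch of the kind of change I have in mind, swapping moveTo() for a values-only copyTo() and keeping the deleteRow() cleanup:

  moveRows.forEach(function(e) {
    var source = sheet.getRange(e, 1, 1, sheet.getLastColumn());
    // PASTE_VALUES writes only the evaluated values, so formulas are not carried over
    source.copyTo(targetSheet.getRange(targetSheet.getLastRow() + 1, 1),
                  SpreadsheetApp.CopyPasteType.PASTE_VALUES, false);
  });
  moveRows.reverse().forEach(function(e) {sheet.deleteRow(e)});
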

advertising – Is there a way to protect against some other site copying your Facebook pixel tracking id and messing up your targeting?

A Facebook pixel helps to build a narrow / targeted custom audience focused on people who visited my website (this is really important).

The problem is that anyone can see my FB pixel ID on my webpage (it’s in the source!).

Now what if someone wants to ruin my custom audience by using this script with my own FB pixel ID on a crappy website that has lots of visitors?

<script>
!function(f,b,e,v,n,t,s){if(f.fbq)return;n=f.fbq=function(){n.callMethod?
n.callMethod.apply(n,arguments):n.queue.push(arguments)};if(!f._fbq)f._fbq=n;
n.push=n;n.loaded=!0;n.version='2.0';n.queue=[];t=b.createElement(e);t.async=!0;
t.src=v;s=b.getElementsByTagName(e)[0];s.parentNode.insertBefore(t,s)}(window,
document,'script','https://connect.facebook.net/en_US/fbevents.js');
fbq('init', '<MYKEY>'); // Insert your pixel ID here.
fbq('track', 'PageView');
</script>

Then my custom audience (using my FB pixel ID) will be “polluted” by lots of visits to the crappy website.

Then any retargeting campaign will be focused on these random visitors (who visited the crappy website, not mine), which would mean a low conversion rate and wasted ad spend.

Is there really no way to avoid this?

8 – Copying prod multi-site installation to dev multi-site installation

I am attempting to upgrade my Drupal 8.9.2 multi-site installation to D9. I was hoping to work this all out on a development installation located in a subdirectory of my home directory on my NameCheap shared hosting plan.

When I first set up this installation years ago, I did so using the tarball method. I then followed the instructions given here (https://www.drupal.org/docs/installing-drupal/add-composer-to-an-existing-site) and was able to convert the currently running prod version to use Composer without an issue.

Before I start messing with the files and attempt to upgrade to D9 through Composer on a prod installation, I wanted to copy the files over to another installation that I can use as a sandbox. To do this I:

  • copied the files on my server to a new directory
  • made clone databases for the new installation
  • truncated all the cache tables in the cloned DB
  • changed all the database settings in the settings files to point to the cloned databases (roughly as in the sketch after this list)
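
The database change in each site’s settings.php is just the standard connection array, roughly like this, with placeholder names (the real names and credentials differ):

$databases['default']['default'] = [
  'database' => 'clone_of_prod_db',   // placeholder name for the cloned database
  'username' => 'db_user',
  'password' => 'db_password',
  'host' => 'localhost',
  'port' => '3306',
  'driver' => 'mysql',
  'prefix' => '',
];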

I’ve done this before without any issues, but now I am getting a WSOD on all the sites running from the new installation. The error is:

PHP Fatal error:  Uncaught Error: Class 'Drupal\Core\Cache\DatabaseBackend' not found in ~/{new installation dir}/public_html/index.php:16.

If anyone has any thoughts on what’s going on, I am all ears.

Thank you in advance

macos – AppleTv is busy: Copying cache files from device Xcode will continue when AppleTv is finished

I have been trying to pair an Apple TV to my Mac through Xcode.
I have tried three different networks, one of which is completely dedicated to my project; in other words, I am the only one using it.

I have tried for hours on end: reset the Apple TV, restarted both the Mac and the Apple TV, tried three different networks, and the result is always the same.

AppleTv is busy: Copying cache files from device

Xcode will continue when AppleTv is finished.

I have no idea what to try next. I have wasted countless hours doing something that should be straightforward and I am totally unable to make any progress in my project.

Any help is welcome.


NOTE: On previous days, whenever the pairing worked, it stopped working if I left the Apple TV untouched for a long period of time (hours). It then appears as “Disconnected”. I have no idea why this happens.

jinja2 – Ansible – copying and editing a remote file at the same time?

In an Ansible role, I’m looking for a way to copy a remote file to a different location, while also replacing the first two lines of the file.

I’m open to other approaches, but my current approach is to use slurp to retrieve the file, somehow convert it into a list of individual lines, and then write it back using a template.

I’m stuck at the step of splitting the string returned by slurp into lines.

Ansible 2.9, both the controller and the remote hosts are running RHEL 7.8.

The input file is already on the remote host as /etc/sample/inputfile.txt

Line1 something
Line2 something
Line3 stays untouched
Line4 stays untouched

Desired output in /etc/sample/outputfile.txt

Line1 has changed
Line2 has also changed
Line3 stays untouched
Line4 stays untouched

The effect I want to replicate is what the following sequence would produce, but idempotently without making three changes to outputfile.txt on every run.

- copy:
    src: /etc/sample/inputfile.txt
    dest: /etc/sample/outputfile.txt
    remote_src: True

- lineinfile:
    path: /etc/sample/outputfile.txt
    regexp: '^Line1'
    line: 'Line1 has changed'

- lineinfile:
    path: /etc/sample/outputfile.txt
    regexp: '^Line2'
    line: 'Line2 has also changed'

To make this idempotent, my current idea is to use slurp to retrieve inputfile.txt, then (somehow) strip the first two lines of that variable, and create outputfile.txt using a template.

- slurp:
    src: /etc/sample/inputfile.txt
  register: r_input
- set_fact:
    intermediate_var: '{{ r_input.content | b64decode }}'
- ??? How would I convert intermediate_var into a list of lines?
- template:
  vars:
    line1: 'Line1 has changed'
    line2: 'Line2 has also changed'
    body:  '??? intermediate_var without the first two lines'

However, I haven’t figured out a way to strip the first two lines off the slurped variable. I’m aware of the “lines” lookup plugin but haven’t figured out how to apply it in this scenario.
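
For context, an untested sketch of what the missing pieces might look like, assuming splitting on newlines is even the right tool (outputfile.txt.j2 is a hypothetical template name):

- slurp:
    src: /etc/sample/inputfile.txt
  register: r_input

- set_fact:
    # drop the first two lines of the decoded file content
    body_lines: '{{ (r_input.content | b64decode).splitlines()[2:] }}'

- template:
    src: outputfile.txt.j2
    dest: /etc/sample/outputfile.txt
  vars:
    line1: 'Line1 has changed'
    line2: 'Line2 has also changed'

with the hypothetical outputfile.txt.j2 being nothing more than:

{{ line1 }}
{{ line2 }}
{% for l in body_lines %}
{{ l }}
{% endfor %}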