Azcopy files – copy latest files only

How do I use AzCopy to copy only the latest files (specify currentdate()-1 or something like that) between two Azure storage accounts?
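Roughly the effect I’m after, sketched with the azure-storage-blob Python SDK rather than AzCopy itself (the connection strings and container names are placeholders, and a private source container would also need a SAS token appended to the copy URL):

from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobServiceClient

# placeholders: connection strings for the two storage accounts
src = BlobServiceClient.from_connection_string("<source-connection-string>")
dst = BlobServiceClient.from_connection_string("<destination-connection-string>")

src_container = src.get_container_client("source-container")
dst_container = dst.get_container_client("destination-container")

# "latest" here means modified within the last day
cutoff = datetime.now(timezone.utc) - timedelta(days=1)

for blob in src_container.list_blobs():
    if blob.last_modified >= cutoff:
        # server-side copy; append a SAS token here if the source is private
        source_url = f"{src_container.url}/{blob.name}"
        dst_container.get_blob_client(blob.name).start_copy_from_url(source_url)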

How to get a document thumbnail via REST API (SharePoint Online – classic) for Excel, Word and PDF files

I’m looking at getting a thumbnail image of documents when they are uploaded into a document library. I can see the thumbnail preview when I click the (…) context menu of the document.
Is there a way to somehow get this thumbnail using the REST API?

I did try the below for a PNG file:

https://tenant.sharepoint.com/sites/test_layouts/15/getpreview.ashx?path=https%3A%2F%2Ftenant.sharepoint.com%2Fsites%2Ftest%2FTestWork.png 

and it worked, but not for Word or Excel documents.
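For Office documents I’m wondering whether the Microsoft Graph thumbnails endpoint could be used instead; a minimal sketch of what I have in mind (the access token, site ID and file name are placeholders):

import requests

TOKEN = "<graph-access-token>"   # placeholder access token
SITE_ID = "<site-id>"            # e.g. from GET /v1.0/sites/tenant.sharepoint.com:/sites/test

resp = requests.get(
    f"https://graph.microsoft.com/v1.0/sites/{SITE_ID}"
    "/drive/root:/TestWork.docx:/thumbnails",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# each thumbnail set exposes small/medium/large images with downloadable URLs
for thumb_set in resp.json()["value"]:
    print(thumb_set["large"]["url"])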

Thanks in advance

python – PyPDF2: merging PDF files

I sometimes need to merge some big PDF files. Looking around I decided to use PyPDF2. All files are in the same directory as the script. The code works as intended and was tested with 5 small PDF files.

I found many variations and decided to write this one myself:

import glob
from PyPDF2 import PdfFileMerger


files = sorted(glob.glob('./*.pdf'))
merger = PdfFileMerger()
filename = input("Enter merged file name: ")

for file in files:
    merger.append(file)
    print(f"Processed file {file}")

merger.write(f'{filename}.pdf')
merger.close()

All files have names such as 1.pdf, 2.pdf.

The writing-to-file bit seems sloppy to me, but I don’t know how to improve it, or even whether I should. Did I ensure that the files list will always be in alphabetical order? Are there best-practice things I missed? Any other feedback is welcome as well.

The code only needs to run on Windows; cross-platform support is not a concern for me. I don’t anticipate memory constraints, as the system it runs on has a lot of RAM. This will only run client-side, using Python 3.8.
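One thing I’m unsure about: with names like 1.pdf and 2.pdf, a plain alphabetical sort would put 10.pdf before 2.pdf, so I’ve been considering a numeric sort key along these lines (assuming every file name is just a number):

import glob
import os

def numeric_key(path):
    # "./10.pdf" -> 10, so 2.pdf sorts before 10.pdf
    name, _ = os.path.splitext(os.path.basename(path))
    return int(name)

files = sorted(glob.glob('./*.pdf'), key=numeric_key)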

Wrote to one disk of RAID array, files truncated, trying to recover data, Arch Linux

I’m trying to recover some data from a corrupt RAID1 pair and wanted some advice before I make matters worse.

This is how I caused the problem: I wanted to save some data prior to reinstalling my OS. I found an old computer with two magnetic disks in it, inserted both into the main computer, and ran lsblk, which said something like this, IIRC:

sda           8:0    0   1.8T  0 disk 
├─sda1        8:1    0  59.6G  0 part 
└─sda2        8:2    0   1.8T  0 part 
  └─md127     ???    ?   1.8T  ? ????
sdb           8:0    0   1.8T  0 disk 
├─sdb1        8:1    0  59.6G  0 part 
└─sdb2        8:2    0   1.8T  0 part 
nvme0n1     259:0    0 931.5G  0 disk 
├─nvme0n1p1 259:1    0   300M  0 part 
├─nvme0n1p2 259:2    0 896.8G  0 part /
└─nvme0n1p3 259:3    0  34.4G  0 part (SWAP)
  

where the NVMe drive is the main disk. I can’t remember with certainty which of the two disks had md127 on it, but I think it was sda. mdadm was not installed at this time (at least, not to my knowledge – it was a Manjaro box). I found I could mount md127, and I saved my precious data to it in a few tarballs. Then I wiped the main disk and reinstalled. At some point I installed mdadm in the new OS and mounted the RAID1 pair. I then found that the tarballs I’d saved were truncated.

How should I proceed to recover the rest of the tarballs?

I wonder whether a simple fsck would do the trick, but I’m nervous about banging around at random.

ddrescue wouldn’t be much help because I don’t have another 2 TB disk in the house, and I’m not sure which, if either, of the magnetic disks still has useful data on it.

photo editing – Why does GIMP increase the size of exported JPEG files?

I’m using GIMP to edit some vacation pictures (JPEG files) taken with a rather old digital camera. Lacking a professional background, I just went about by trial and error and ended up adjusting the following parameters:

  • Images are way too dark → Colors > Levels > Input Levels > Increase Clamp Input, decrease High Input
  • Colors could be more vibrant → Colors > Auto > Color Enhance

When exporting the edited files, I noticed that the file size has increased by a factor of 3, i.e. files less than 3 MB in size are now 10 MB or larger. What is causing this and is there any way to prevent this without trading away image quality?
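My working assumption is that the export quality setting is the main driver, since GIMP re-encodes the image on export and its default quality is likely higher than whatever the old camera used. A small Pillow sketch, just to compare how strongly the quality parameter alone affects the size of the same image (the file name is a placeholder):

from PIL import Image

img = Image.open("vacation.jpg")   # placeholder file name

# re-encode the same pixels at a few different JPEG quality settings
for quality in (95, 85, 75):
    img.save(f"vacation_q{quality}.jpg", quality=quality)

Comparing the resulting file sizes against the original should show how much of the growth comes from the encoder settings rather than from the edits themselves.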

ipad – iPadOS: Creating files on SMB share

A new issue has cropped up for me—possibly since iPadOS 14 was released (last week as of this writing), but I’m not sure it hasn’t been around longer, because it’s been some weeks since I last tried this and it worked. The issue is specifically with creating files on an SMB share.

My iPad’s Files sidebar has an SMB share (boringly called Shared and located on the Linux machine at ~/Shared) exported by Samba from an Ubuntu Linux workstation on my home network, the same network my iPad’s on.

It works fine for editing files—meaning, if I open a file from an app supporting Files selection, or use the Files app itself to invoke an app using a long-press on a file in Shared, I can modify it, and those changes propagate to my Linux machine’s storage with no problem.

It also works for deleting files using the Files app, by long-holding on the file in Shared and selecting Delete Now.

The problem arises only when I try to create a new file. If I copy a file into Shared by what I think of as the “classic Windows method” (finding its original location, long-pressing to Copy, navigating to Shared and doing Paste), I get an error panel reading:

The operation couldn’t be completed. Operation canceled
Operation canceled
OK

If I try to save a file into Shared directly from an app’s export sheet (for example, Vectornator’s), I get the same error panel.

I have tried restarting the Linux machine and ejecting and remounting the SMB share on the iPad.
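One further debugging step I’m considering is creating a brand-new file on the share from a different (non-iPad) client, to rule out the Samba side. A sketch using the smbprotocol package’s smbclient module (server name, share, credentials and file name are placeholders):

import smbclient

# placeholders: Samba server and credentials
smbclient.register_session("linuxbox", username="me", password="secret")

# try to create a brand-new file on the share from a non-iPad client
with smbclient.open_file(r"\\linuxbox\Shared\created-from-python.txt", mode="w") as f:
    f.write("created over SMB\n")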

One additional possible clue: since editing existing files works, I thought that creating the file on Linux first with touch ~/Shared/two-boxes.svg and then pasting over it might work. I did this, tried to paste the two-boxes.svg file, and got the “Replace Existing Items?” dialog warning me about overwriting the empty file:

Replace Existing Items
The file “two-boxes” already exists in this location. Do you want to replace it with the one you’re copying?
Replace
Keep Both
Stop

Interestingly, if I select Replace, I get the same “Operation canceled” error, but then no two-boxes.svg file at all is left behind. On the other hand, if I select Keep Both, I get no error, but also nothing happens—the old file remains and no new file is created.

One final possible clue: if I go into Shared in the Files app and try to create a new directory, the Files app itself crashes immediately (I’m sent to the home screen and when I restart Files, it reinitializes its state). But oddly, even though it crashes, it worked: the directory is created and visible on Linux!

dresden files – What skill is used to recognise someone?

Consider, for example, a character who hears a voice recording of someone, such as a recording that a journalist made of an interview. Later, the character coincidentally meets someone who spoke in that recording.

What skill should they be called on to roll to see whether they recognise that person by the voice they heard, or whether they fail to pick up on it?

I’m uncertain which of the skills in the system’s official mechanics is the one that’s supposed to be used. The mental skills of Discipline and Conviction don’t seem appropriate as described, but I don’t think the usual “knowing people” skill of Contacts fits either.

permissions – Grant access to multiple individual files in a SharePoint library

I have 500 xlsx files in a SharePoint site library. Each file belongs to a different specific member of the organization. I can grant Modify permission to each file individually, but maybe there is a command similar to icacls that I could repeat in a loop to grant access to each file very fast?
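As far as I know there is no direct icacls equivalent, but the loop could be scripted against the SharePoint REST API. A rough sketch of the idea in Python (the site URL, bearer token, server-relative file paths, user principal IDs and role name are all placeholders; each file’s inheritance is broken before the role assignment is added):

import requests

SITE = "https://tenant.sharepoint.com/sites/test"   # placeholder site URL
HEADERS = {
    "Authorization": "Bearer <access-token>",       # placeholder token
    "Accept": "application/json;odata=verbose",
}

# hypothetical mapping of server-relative file URLs to user principal IDs
assignments = {
    "/sites/test/Shared Documents/alice.xlsx": 11,
    "/sites/test/Shared Documents/bob.xlsx": 12,
}

# look up the ID of the Contribute role definition once
role_id = requests.get(
    f"{SITE}/_api/web/roledefinitions/getbyname('Contribute')",
    headers=HEADERS,
).json()["d"]["Id"]

for file_url, principal_id in assignments.items():
    item = f"{SITE}/_api/web/GetFileByServerRelativeUrl('{file_url}')/ListItemAllFields"
    # stop inheriting permissions from the library
    requests.post(f"{item}/breakroleinheritance(copyRoleAssignments=false,clearSubscopes=true)",
                  headers=HEADERS)
    # grant this user Contribute rights on this one file
    requests.post(f"{item}/roleassignments/addroleassignment(principalid={principal_id},roledefid={role_id})",
                  headers=HEADERS)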

postgresql – What’s with these empty log files? Why does csvlog mode create plaintext ones?

I’ve been fighting for days now to get logging set up. I’ve had to write a ton of code manually because, for some reason, PG doesn’t provide any automated mechanism to do this, nor does it tell you anything beyond this.

I have:

  1. Set up the postgres_log table exactly like it says on that page.
  2. Set up my postgresql.conf like this (also as it says on the page, except the page only describes it vaguely and leaves me to figure everything out on my own):
log_destination = 'csvlog'
logging_collector = on
log_directory = 'C:\\pglogs' # Yes, I need doubled \ characters here or else it removes them entirely...
log_filename = 'PG_%Y-%m-%d_%H;%M;%S'
log_rotation_age = 1min
log_rotation_size = 0
log_truncate_on_rotation = on
  3. Coded my own mechanism to constantly go through C:\pglogs for any .csv file, skipping any that PG reports as currently in use via pg_current_logfile, feed them into PG’s table, and then delete the file (a sketch of that loop follows this list). This took me a huge amount of time and effort, and not a word about it was mentioned in that “manual”.
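For reference, that importer loop boils down to something like the sketch below (connection details are placeholders; it assumes psycopg2 and the postgres_log table from the docs):

import glob
import os
import psycopg2

LOG_DIR = r"C:\pglogs"   # same directory as log_directory

conn = psycopg2.connect("dbname=postgres user=postgres")   # placeholder connection
conn.autocommit = True

with conn.cursor() as cur:
    # ask the server which CSV file it is currently writing, so we can skip it
    cur.execute("SELECT pg_current_logfile('csvlog')")
    current = cur.fetchone()[0]
    current = os.path.basename(current) if current else None

    for path in glob.glob(os.path.join(LOG_DIR, "*.csv")):
        if os.path.basename(path) == current:
            continue
        with open(path, encoding="utf-8") as f:
            # bulk-load the CSV rows into the postgres_log table, then drop the file
            cur.copy_expert("COPY postgres_log FROM STDIN WITH csv", f)
        os.remove(path)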

Questions:

  1. PostgreSQL creates both PG_2020-09-20_00;56;19.csv (in CSV format) and PG_2020-09-20_00;56;19 (in plaintext format) files. I obviously don’t want the extensionless files. Why are they created?
  2. Every minute (as specified) PG creates new log files, even if there’s nothing new to log. This results in an endless stream of empty log files (which my custom script goes through, “imports” and then deletes). How do I tell PG to stop doing that? It seems like pointless wear & tear on my disk to make empty files which are just deleted seconds later by my ever-running script.
  3. Why isn’t all of this automated? Why do I have to spend so much time to manually cobble together a solution to import the CSV files back into PG? In fact, why are they dumped to CSV files in the first place? Why doesn’t PG have the ability to directly log into that database table? It seems like a pointless exercise to dump CSV files which are only going to be COPYed back into the database and then deleted.
