broadcast – Compress transaction hex string

I realized that a transaction's hex string can be very long, and you need it in full to broadcast a tx using the blockstream.info API:

https://blockstream.info/testnet/api/tx

https://github.com/Blockstream/esplora/blob/master/API.md

If the user wants to send this hex by text message, which allows 160 characters per message, what would be the best way to solve this problem? I researched compressing it and sending it to a number that forwards to a web server running PHP code, where the hex string is decompressed and submitted to blockstream.info to broadcast the tx. However, Base64 encoding after gzcompress() couldn't reduce the number of characters to less than 160.

Example:

Hex:

02000000000101e939fb23e9991ebbc75fd08c736da32ca12d98a4ff1b8e970e97f5661927ee410100000000fdffffff02b0a90a000000000016001421e2f997b3bd36e273eaca365da8515a389444ae40420f0000000000160014829e2dbcf6b7f31bc93633971f71f6f6b9b5f89e0247304402200f8e3e573be749caf1964a85707bf540de2e7b367ae46c23bd4f21932ff82346022062dc3007072cd5a19b45e479525f4829bc48be4fd3c21b5a9ae34bcf9a3a3ccf0121020f88c7db36cbb492e80d3062fc19db55bed82687498f8cfe6d0cf47adf6687aa49f31b00

Compressed and Base64 encoded:

eNpdkNmJBEAIRFPyarsNxzP/ENZZGBZW9Eew6pVA8C0EbGObIG4zw47Ie6bg5WUtZ0pHKnsuMxiv7cLOHFU0ut2yCl+xqfktoAA3cPiz0R0hbBqzGxzF2nS5PZ31lL+Dx/mZiHgLCMH8v35kTRU5GncYI42V2S7Otu7W4syzBpLLIAK0Mec197kcfcXSB01lzS7cmCNQTb04etdUk5ZLhtCYZh6x6EdDqZIB9oSyjqOFnJZrh858oCLlRcsUJ2EcN2+WxTRn58wBJITN8/altV4ZIUb9oHi1J9EqzomuR/qW8s3LaS3Ikes1ult3sU9mgB8sW3Vy

[screenshot of the compression result]
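For scale, here is a rough Python sketch (a measurement aid, not the PHP pipeline itself) showing where the characters go. The key point is that hex encoding doubles the raw size, and a signed transaction is mostly high-entropy signature and key data, so generic compression on top of the raw bytes rarely wins much:

import base64, zlib

tx_hex = "02000000000101e939"  # placeholder: paste the full hex from above

raw = bytes.fromhex(tx_hex)    # undo the hex encoding first (halves the size)

print("hex chars:   ", len(tx_hex))
print("base64 chars:", len(base64.b64encode(raw)))                   # ~4/3 of the byte count
print("zlib+base64: ", len(base64.b64encode(zlib.compress(raw, 9))))

With a ~220-byte transaction like the one above, even Base64 of the raw bytes lands near 300 characters, so fitting a single 160-character SMS is unlikely; splitting the payload across two messages and reassembling it on the server side may be the more dependable route.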

java – SevenZipCompressor code that uses Commons-Compress to compress and uncompress 7zip archives

I wrote a Compressor utility class to generate 7zip files containing everything in a specific directory. The problem is that it’s an order of magnitude slower than both archiving the directory using the native Windows 7zip.exe AND archiving to a .zip file using a ZipCompressor that uses the same algorithm but with the default Java zip classes.

/*
 * SevenZipCompressor.java
 *
 * Date 20/08/2020
 *
 * Copyright Ikan Software N.V. 2003 - 2020, All Rights Reserved.<br><br>
 *
 * This software is the proprietary information of Ikan Software N.V.
 * Use is subject to license terms.
 */
package lib.util.compressors.sevenzip;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import lib.util.compressors.Compressor;
import lib.util.compressors.CompressorException;
import lib.util.compressors.Entry;

import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry;
import org.apache.commons.compress.archivers.sevenz.SevenZFile;
import org.apache.commons.compress.archivers.sevenz.SevenZOutputFile;

import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

/**
 * The SevenZipCompressor class supplies a simple way of writing 7zip files.
 * 
 * @author nak
 */
public class SevenZipCompressor extends Compressor {
    
    /**
     * @see lib.util.compressors.Compressor#compress
     */
    public void compress(String fileName, String dirName) throws CompressorException {

        // Zip the directory
        File sevenZipFile = new File(fileName);
        try (SevenZOutputFile out = new SevenZOutputFile(sevenZipFile)) {
            // Loop over all files and add them plus their content recursively to the 7zip archive
            File dir = new File(dirName);
            compress(sevenZipFile, out, dir, dir);
        } catch (IOException e) {
            throw new CompressorException(e);
        }
    }
    
    

    /**
     * @see lib.util.compressors.Compressor#uncompress
     */
    public void uncompress(String fileName, String dirName) throws CompressorException {
        // Open the zipfile
        try (SevenZFile zipFile = new SevenZFile(new File(fileName))) {

            // Get the size of each entry
            Map<String, Integer> zipEntrySizes = new HashMap<String, Integer>();
            for (SevenZArchiveEntry entry : zipFile.getEntries()) {
                zipEntrySizes.put(entry.getName(), Integer.valueOf((int) entry.getSize()));
            }


            // Start reading zipentries
            SevenZArchiveEntry zipEntry = null;
            while ((zipEntry = zipFile.getNextEntry()) != null) {
                
                // Zipentry is a file
                if (!zipEntry.isDirectory()) {

                    // Get the size
                    int size = (int) zipEntry.getSize();
                    if (size == -1) {
                        size = ((Integer) zipEntrySizes.get(zipEntry.getName())).intValue();
                    }

                    // Get the content
                    byte[] buffer = new byte[size];
                    int bytesInBuffer = 0;
                    int bytesRead = 0;
                    while ((size - bytesInBuffer) > 0) {
                        bytesRead = zipFile.read(buffer, bytesInBuffer, size - bytesInBuffer);
                        if (bytesRead == -1) {
                            break;
                        }
                        bytesInBuffer += bytesRead;
                    }

                    String zipEntryName = zipEntry.getName();
                    // Replace all "\" with "/"
                    zipEntryName = zipEntryName.replace('\\', '/');

                    // Get the full path name
                    File file = new File(dirName, zipEntryName);

                    // Create the parent directory
                    if (!file.getParentFile().exists()) {
                        file.getParentFile().mkdirs();
                    }

                    // Save the file
                    try (FileOutputStream fos = new FileOutputStream(file.getPath())) {
                        fos.write(buffer, 0, bytesInBuffer);
                    }

                    // Set modification date to the date in the zipEntry
                    file.setLastModified(zipEntry.getLastModifiedDate().getTime());
                }
                // Zipentry is a directory
                else {

                    String zipEntryName = zipEntry.getName();
                    // Replace all "\" with "/"
                    zipEntryName = zipEntryName.replace('\\', '/');

                    // Create the directory
                    File dir = new File(dirName, zipEntryName);
                    dir.setLastModified(zipEntry.getLastModifiedDate().getTime());
                    if (!dir.exists()) {
                        dir.mkdirs();
                    }
                }
            }
        } catch (IOException ioe) {
            throw new CompressorException(ioe);
        }
    }

    /**
     * @see lib.util.compressors.Compressor#getEntries
     */
    public List<Entry> getEntries(String fileName, boolean calculateCrc) throws CompressorException {

        // List to return all entries
        List<Entry> entries = new ArrayList<Entry>();
        
        // Open the zipfile
        try (SevenZFile zipFile = new SevenZFile(new File(fileName))) {

            // Collect the metadata of each entry
            for (SevenZArchiveEntry zipEntry : zipFile.getEntries()) {
                Entry entry = new Entry();
                entry.setName(zipEntry.getName());
                if (calculateCrc) {
                    entry.setCrc(zipEntry.getCrcValue());
                } else {
                    entry.setCrc(-1);
                }
                entry.setDirectory(zipEntry.isDirectory());
                // 7z is a very Windows-specific format, using NTFS timestamps instead of Java time.
                entry.setTime(zipEntry.getLastModifiedDate().getTime());
                entry.setSize(zipEntry.getSize());
                entries.add(entry);
            }

            // Sort entries by ascending name
            sortEntries(entries);

            // Return entries
            return entries;

        } catch (IOException ioe) {
            throw new CompressorException(ioe);
        }
    }
    
    /**
     * Add a new entry to the zip file.
     * 
     * @param zos the 7zip output file the entry is written to
     * @param name the name of the entry
     * @param inputFile the file the entry is created from
     * @param buffer the file contents, or null for a directory entry
     * @throws IOException
     */
    private void addEntry(SevenZOutputFile zos, String name, File inputFile, byte[] buffer) throws IOException {
        SevenZArchiveEntry zipEntry = zos.createArchiveEntry(inputFile, name);
        if (buffer != null) {
            zipEntry.setSize(buffer.length);
        } 
        zipEntry.setLastModifiedDate(new Date(inputFile.lastModified()));
        zos.putArchiveEntry(zipEntry);
        if (buffer != null) {
            zos.write(buffer);
        }
        zos.closeArchiveEntry();
    }

    /**
     * Zip the files of the given directory.
     * 
     * @param zipFile the File which is used to store the compressed data
     * @param zos the output stream filter for writing files in the ZIP file format
     * @param dir the directory to zip
     * @param relativeDir the name of each zip entry will be relative to this directory
     * @throws FileNotFoundException
     * @throws IOException
     */
    private void compress(File zipFile, SevenZOutputFile zos, File dir, File relativeDir) throws FileNotFoundException, IOException {

        // Create an array of File objects (listFiles() returns null if dir is not a readable directory)
        File[] fileList = dir.listFiles();
        if (fileList == null) {
            return;
        }

        // Directory is not empty
        if (fileList.length != 0) {

            // Loop through File array
            for (int i = 0; i < fileList.length; i++) {

                // The zipfile itself may not be added
                if (!zipFile.equals(fileList[i])) {
                    // Directory
                    if (fileList[i].isDirectory()) {
                        compress(zipFile, zos, fileList[i], relativeDir);
                    }
                    // File
                    else {
                        byte[] buffer = getFileContents(fileList[i]);
                        if (buffer != null) {
                            // Get the path names
                            String filePath = fileList[i].getPath();
                            String relativeDirPath = relativeDir.getPath();

                            // Convert the absolute path name to a relative path name
                            if (filePath.startsWith(relativeDirPath)) {
                                filePath = filePath.substring(relativeDirPath.length());
                                if (filePath.startsWith("/") || filePath.startsWith("\\")) {
                                    if (filePath.length() == 1) {
                                        filePath = "";
                                    } else {
                                        filePath = filePath.substring(1);
                                    }
                                }
                            }

                            // Add the entry
                            addEntry(zos, filePath, fileList[i], buffer);
                        }
                    }
                }
            }
        }
        // Directory is empty
        else {
            // Get the path names
            String filePath = dir.getPath();
            String relativeDirPath = relativeDir.getPath();

            // Convert the absolute path name to a relative path name
            if (filePath.startsWith(relativeDirPath)) {
                filePath = filePath.substring(relativeDirPath.length());
                if (filePath.startsWith("/") || filePath.startsWith("\\")) {
                    if (filePath.length() == 1) {
                        filePath = "";
                    } else {
                        filePath = filePath.substring(1);
                    }
                }
            }

            // Add the entry
            if (!filePath.endsWith("\\") && !filePath.endsWith("/")) {
                addEntry(zos, filePath + "/", dir, null);
            }
            else {
                addEntry(zos, filePath, dir, null);
            }
        }
    }

    /**
     * Read the contents of a file for zipping.
     * 
     * @param file the File to read
     * @return an array of bytes
     * @throws FileNotFoundException
     * @throws IOException
     */
    private byte[] getFileContents(File file) throws FileNotFoundException, IOException {

        long len = file.length();
        byte[] buffer = new byte[(int) len];
        // Read the whole file: a single read() call may return fewer bytes than requested
        try (FileInputStream fis = new FileInputStream(file)) {
            int offset = 0;
            while (offset < buffer.length) {
                int read = fis.read(buffer, offset, buffer.length - offset);
                if (read == -1) {
                    break;
                }
                offset += read;
            }
        }
        return buffer;
    }
}

I doubt that Commons-Compress is so significantly slower than just the native 7zip.exe written by Igor Pavlov when used properly, so any feedback on how to speed this up is appreciated. For reference: a directory with 900 MB of data that would take roughly 45 seconds to compress using 7zip.exe takes over 5 minutes with this code.

design – Is it reliable to compress database backups with git?

I’m working as an intern at a fund. I spent the last month building a website for internal use, and now I think it’s a good time to set up a backup scheme for the MySQL database at its backend. Funnily enough, my mentor is reluctant to get me another server, because we are not allowed to use external IaaS like AWS and DigitalOcean for security reasons, and it takes weeks to get a usable server from our IT department. Thus, I’m planning to keep the backup on the same server that runs my website. Yes, I understand the data would be gone if the disk fails or the server blows up, but at least it would be a lifesaver in case of an accidental DROP DATABASE production;. By the way, all “servers” assigned by IT appear to be VPSes running on the same physical machine, so I guess backing up to another virtual server couldn’t protect the data against a disk/server/power failure anyway.

Anyway, here is my local backup plan: I will run mysqldump and commit it to a local git repository every minute. More concretely, I have set up a cronjob to run the following script with */1 * * * *. Essentially I’m using git as an incremental compression tool, rather than a VCS.
I have also included some tags for easy navigation.

#!/usr/bin/env bash
export BACKUP_DIR=/home/foo/Backups/bar
mysqldump \
    --defaults-extra-file=/home/foo/Developer/MYSQL_ROOT.cnf \
    --single-transaction \
    --extended-insert=FALSE \
    production | sed '$d' > $BACKUP_DIR/production.sql
git -C $BACKUP_DIR commit --all -m 'Auto backup via cron' > /dev/null
git -C $BACKUP_DIR tag -f `date +%F`
git -C $BACKUP_DIR tag -f `date +%F@%H`
git -C $BACKUP_DIR tag -f `date +%F@%H-%M`

Currently, the backup script takes about a second to complete, and the resulting production.sql is around 3MiB in size. I estimate it would stay under 10MiB for years. The website in question has ~20 users, and I won’t expect more than 1000 requests per day. I’m using MySQL Community Server 8.0.21 on RHEL 7, without any Enterprise subscription.

Can I make a reliable local backup this way? Is one lightweight tag per minute too much? Is there a better alternative?

linux – How to compress logs that differ by name and date

Does anyone know how to compress logs when the log name contains a date? For example:

/var/log/example/examplelog-20202905.log

/var/log/example/examplelog-20203005.log

/var/log/example/examplelog-20203105.log

/var/log/example/examplelog-20200106.log

/var/log/example/examplelog-20200206.log

Or does anyone know how to compress every 10 logs into one gz file?

Can logrotate handle this?

Thank you
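logrotate can compress each rotated log on its own (the compress directive), but batching every 10 logs into a single archive is not something it does out of the box, and a plain .gz cannot hold multiple files anyway, so a tarball built by a small script is one way. Here is a minimal Python sketch, assuming the examplelog-YYYYDDMM.log naming above (the date part looks like year-day-month, so it is reordered before sorting):

import tarfile
from pathlib import Path

LOG_DIR = Path("/var/log/example")  # the directory from the question
BATCH = 10                          # archive every 10 logs together

def date_key(path: Path) -> str:
    # names look like examplelog-YYYYDDMM.log; reorder to YYYYMMDD for sorting
    stamp = path.stem.rsplit("-", 1)[-1]
    return stamp[:4] + stamp[6:8] + stamp[4:6]

logs = sorted(LOG_DIR.glob("examplelog-*.log"), key=date_key)
for i in range(0, len(logs) - len(logs) % BATCH, BATCH):  # full batches only
    group = logs[i:i + BATCH]
    name = LOG_DIR / f"examplelog-{date_key(group[0])}-{date_key(group[-1])}.tar.gz"
    with tarfile.open(name, "w:gz") as tar:
        for log in group:
            tar.add(log, arcname=log.name)
    for log in group:
        log.unlink()  # remove the originals once they are archived

The newest logs that don't yet fill a batch of 10 are left untouched, so the script can simply run again later, from cron or a logrotate postrotate hook.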

coinbase transaction: a bitcoin fork idea to help compress the blockchain

I have thought of a soft fork that can help with storage costs.

Why don't we force miners to embed the height of the TX Merkle tree into the first two bytes of the 4-byte block header version? (A rough packing sketch follows the list below.)

It would have two advantages:

  • It would fix the brute-force weakness around Merkle leaf nodes (CVE-2017-12842), which is currently only mitigated by standardness rules, not by consensus rules.

  • Similar to the description in BIP 141, we could introduce a node type that is neither a pruned node nor a full node, but keeps txindex=1 for transactions with unspent outputs. Those nodes would first store entire blocks. When the next block arrives, they would use their txindex to locate the block, find the transaction, and check whether all of its outputs are spent. If so, they would remove the transaction from that block's storage and keep only its hash. This would save a lot of space, since it seems to me that most queries against a txindex-enabled node use gettransaction on transactions with unspent outputs.
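To make the version-packing idea concrete, here is a hypothetical Python sketch (the exact field layout is my assumption, not a written spec). A validator that knows the committed tree height could reject any Merkle branch of a different depth, which is what would close the 64-byte-transaction ambiguity by consensus:

import math

def pack_version(version: int, num_txs: int) -> int:
    # hypothetical layout: Merkle tree height in the top two bytes,
    # the remaining version bits in the bottom two
    height = math.ceil(math.log2(num_txs)) if num_txs > 1 else 0
    return (height << 16) | (version & 0xFFFF)

def unpack_height(version: int) -> int:
    return version >> 16

# a block with 2500 transactions has a Merkle tree of height 12
print(unpack_height(pack_version(2, 2500)))  # -> 12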

Any comment? I don't think it's worth sending this to bitcoin-ml.

Compress and encrypt file systems for backups

I am currently backing up the remote OS while maintaining permissions using rsync:

rsync -aAX --numeric-ids --delete ... root@X.X.X.X:/ /backup/server/

Now I want to back up the backup to a third party. Currently I have:

tar --xattrs -czpf - "/backup/server/" | openssl enc -aes-256-cbc -a -salt -pass pass:"$KEY" -out "/backup/server.encr"

Which will compress and encrypt while retaining permissions.

Eventually I'd like to rewrite this in Go, but preserving permissions seems a bit tricky. Is there a library alternative that can do this?
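For comparison, here is a rough Python sketch of the same pipeline (a stand-in, not the Go version asked about). The stdlib tarfile keeps mode, owner and timestamps in the tar headers, though not xattrs, and the encryption uses the cryptography package's Fernet rather than the openssl AES-CBC invocation above, so treat both substitutions as assumptions:

import io
import tarfile
from cryptography.fernet import Fernet  # pip install cryptography

SRC = "/backup/server/"      # the rsync target from above
OUT = "/backup/server.encr"

# Build the archive in memory: fine for modest trees, stream to disk for big ones.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    tar.add(SRC, arcname="server")  # tar headers keep mode/uid/gid/mtime

key = Fernet.generate_key()          # persist this key somewhere safe
with open(OUT, "wb") as fh:
    fh.write(Fernet(key).encrypt(buf.getvalue()))

In Go, the standard archive/tar package exposes the same header fields (Mode, Uid, Gid), so permissions survive there too; xattrs are the part that usually needs extra work in any language.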

Magento2 – How to compress CSS and JS

I checked my website on GTmetrix, and it is asking me to compress CSS and JS files.

How can I achieve this?

Linux: compress a directory into split files of the same size that can be decompressed individually

I need to transfer a 1G folder over the network, and the data formats I send and receive are all under my control. To speed up data reception, I do this now:

  1. Before transferring, compress the 1G folder and then transfer it.

  2. After downloading everything, unzip it.

This may save some time because the amount of transferred data is reduced, but it also adds decompression time. Is it possible to compress one folder into many files of the same size, download one file and unzip one file, so that when all files are unzipped the result is the initial folder? My questions are:

  1. Can this be accomplished?
  2. How can I unzip each file while downloading it?
  3. How can I reduce download and decompression time? (A sketch of one approach follows the list.)
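This can be accomplished; the trick is to make each part an independent compressed stream, so decompression can overlap the download. A minimal Python sketch of the sender side (paths and chunk size are placeholders, and the in-memory buffering is only acceptable for a sketch; a real 1G transfer should stream):

import io
import tarfile
import zlib
from pathlib import Path

CHUNK = 64 * 1024 * 1024  # 64 MiB of raw data per part

def split_compress(folder: str, out_prefix: str) -> None:
    # Pack the folder into a single tar stream...
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(folder, arcname=Path(folder).name)
    data = buf.getvalue()

    # ...then compress fixed-size slices separately: every part is a complete
    # zlib stream, so the receiver can inflate it the moment it arrives.
    for i in range(0, len(data), CHUNK):
        part = zlib.compress(data[i:i + CHUNK], level=6)
        Path(f"{out_prefix}.{i // CHUNK:04d}.zz").write_bytes(part)

On the receiving side, inflate each part as it finishes downloading and append the output; the concatenation of the decompressed parts is the original tar, so by the time the last part arrives almost all of the decompression work is already done.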

python: How to compress images with neural network AutoEncoders?

I wanted to create an image compressor using machine learning, so I started working on an "AutoEncoder". This is a type of neural network that takes an image and creates a compressed vector representation of it.

It has two parts: an encoder and a decoder. The encoder converts images into vector form; the decoder tries to recreate the image from only the vector created by the encoder. This is what it looks like:

[diagram: AutoEncoder]

I have made the encoder a stack of convolutional layers along with some MaxPooling2D layers. However, there is one small problem.

The model works well on any image, but can be significantly improved.

For starters, the encoding dimension is too large! Right now, by my calculations, I'm getting compression of 5 MB ===> 5 KB, which is very lossy, as the compression factor becomes x1000.

CODE =>

import os
import tensorflow as tf
# use tf.keras consistently rather than mixing the standalone keras package with tf
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import ModelCheckpoint
import numpy as np
import matplotlib.pyplot as plt

images_dir = "/content/drive/My Drive/Images/"
EPOCHS = 50

image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
train_data_gen = image_generator.flow_from_directory(directory=str(images_dir),
                                                     batch_size=16,
                                                     shuffle=True,
                                                     target_size=(600, 400),
                                                     class_mode='input')
#*****************************
# ENCODER STARTS HERE
#*****************************

input_img = Input(shape=(600, 400, 3))  # adapt this if using `channels_first` image data format

x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)

# *************************
#  DECODER STARTS HERE 
#****************************

x = Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(64, (1, 1), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)

checkpoint_dir= "/content/drive/My Drive/Checkp_autoenc/"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")

checkpoint = ModelCheckpoint(filepath=checkpoint_prefix,
                             save_weights_only=True)

autoencoder = Model(input_img, decoded)

autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

autoencoder.summary()

autoencoder.fit(train_data_gen,
                epochs=EPOCHS,
                shuffle=True,
                callbacks=[checkpoint])

decoded_imgs = autoencoder.predict(train_data_gen)
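As a sanity check on "the encoding dimension is too large": for the 600x400x3 input above, three rounds of 2x2 pooling leave a 75x50x128 feature map, and a quick back-of-envelope (assuming float32 activations) shows the raw encoding is far bigger than 5 KB, so the x1000 figure presumably assumes further quantization or entropy coding of the code vector:

# encoded feature map after three 2x2 poolings: (600/8) x (400/8) x 128 channels
h, w, c = 600 // 8, 400 // 8, 128
values = h * w * c                          # 480,000 activations
print(values * 4 / 1024, "KiB as float32")  # ~1875 KiB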

I decided on the filter and kernel values on a whim, but they can be adjusted to encode more information and lower the compression factor. As an example, this is what a sample image looks like:

As you can clearly see, the result is very lossy and pixelated because the compression factor is really high. In addition, the convolutional layers could also be flattened and the ReLU non-linearity applied.

But I want some expert feedback on which areas should really be improved and which should be left alone, so I would appreciate some constructive comments and advice!

[sample reconstruction: very lossy!]

Cheers!

linux – Compress repeating files from one directory and save them to another folder

This is an example of what the script should do:

Claro-11.log (file to compress)
Claro-12.log (file created)
personal-11.log, movistar-11.log (files to compress)
movistar-12.log

The problem I have is that I must compress the oldest files, but I do not know the creation dates; the only data that tells me which file is oldest is the number after the dash in its name.

REGEX='^[a-z][a-zA-Z]*-[0-9]*\.log'
for file in $(ls -v $FOLDERPATH | grep "$REGEX" | awk -F'-' '{print $1}' | sort -u)
do
    rm $(ls -v $FOLDERPATH$file* | head -n -1)
done

I wrote this to practice and see if it removes them, but it doesn't work for me, and I don't know how to approach it.
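Since the stated goal is to compress the older files rather than delete them, here is a rough Python sketch of the same logic, assuming the number after the dash orders the files the way ls -v would (the folder paths are placeholders):

import gzip
import shutil
from pathlib import Path

FOLDER = Path("/var/log/example")  # stand-in for $FOLDERPATH
DEST = FOLDER / "archived"         # stand-in for the destination folder
DEST.mkdir(exist_ok=True)

# Group Claro-11.log, Claro-12.log, movistar-11.log, ... by their prefix.
groups = {}
for f in FOLDER.glob("*-*.log"):
    prefix = f.stem.rsplit("-", 1)[0]
    groups.setdefault(prefix, []).append(f)

for prefix, files in groups.items():
    files.sort(key=lambda f: int(f.stem.rsplit("-", 1)[1]))  # numeric, like ls -v
    for old in files[:-1]:  # everything except the newest file of each prefix
        with open(old, "rb") as src, gzip.open(DEST / f"{old.name}.gz", "wb") as dst:
            shutil.copyfileobj(src, dst)
        old.unlink()  # remove the original once it is compressed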