design – How to handle pagination in a stateless application having multiple components involved for the data?

It seems like the only problem is the slowness, but it's not clear why it is slow.

You have two possible approaches:

Approach A: get all the data, but faster. Do this if the volume of data isn't a problem and it's just looping through all the calls that slows things down.

  1. Can you turn off pagination in the sub services?
  2. Can you request all the pages at the same time, asynchronously?
  3. Can you cache stuff?

Approach B: stream the data to the UI in smaller chunks. Do this if the data volume is a problem (a rough sketch follows the list below).

  • get (n) items
  • get their children/extra data
  • send to UI
  • update UI
  • repeat
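
Roughly, Approach B is a loop like the following sketch (fetchPage, fetchChildren and pushToUi are hypothetical stand-ins for your own service calls and UI updates, not real APIs):

import java.util.List;

public class ChunkedLoader {
    private static final int PAGE_SIZE = 10;

    public void loadAll() {
        for (int page = 0; ; page++) {
            List<String> items = fetchPage(page, PAGE_SIZE); // get (n) items
            if (items.isEmpty()) {
                break;                                       // nothing left, stop
            }
            for (String item : items) {
                fetchChildren(item);                         // get their children / extra data
            }
            pushToUi(items);                                 // send to UI; the UI appends the chunk
        }
    }

    // Hypothetical stand-ins for the real sub-service calls and UI update.
    private List<String> fetchPage(int page, int size) { return List.of(); }
    private void fetchChildren(String item) { }
    private void pushToUi(List<String> chunk) { System.out.println(chunk); }
}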

You will probably want a combination of approaches. It sounds from your question like you are processing the data synchronously; it's a simple step to stop doing that and request pages 1, 2, 3, … at the same time rather than processing page 1 before moving on to page 2, but this will likely overload your services unless you have some sort of rate limiting.

Similarly, taking approach B can lead to very synchronous code.

Your code needs to be aware of the rate limit at which the sub services can provide data.

If they have capacity, request the next n items and start work on them. Even if you haven’t sent the first ten yet.

If they don't have capacity, don't ask them for out-of-order data which you won't be able to send because the first chunk isn't finished.

Avoid waiting for that one last child item before requesting the next ten items but also avoid requesting the next 1000 items when you haven’t finished with the first 10 yet!
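
To make that concrete, here is a rough sketch (fetchPage and pushToUi are hypothetical stand-ins for your own service calls and UI updates) of fetching pages concurrently while capping how many requests are in flight, so you prefetch ahead of the UI without racing hundreds of items ahead or overloading the sub-services:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

public class BoundedPageFetcher {
    private static final int MAX_IN_FLIGHT = 4; // crude rate limit: at most 4 page requests at once
    private final ExecutorService pool = Executors.newFixedThreadPool(MAX_IN_FLIGHT);

    public void fetchAll(int totalPages) throws InterruptedException, ExecutionException {
        Semaphore permits = new Semaphore(MAX_IN_FLIGHT);
        List<Future<List<String>>> pending = new ArrayList<>();
        for (int page = 0; page < totalPages; page++) {
            permits.acquire();              // don't race 1000 items ahead of what you can handle
            final int p = page;
            pending.add(pool.submit(() -> {
                try {
                    return fetchPage(p);    // one page of items plus their child data
                } finally {
                    permits.release();      // free a slot as soon as this page is done
                }
            }));
        }
        for (Future<List<String>> chunk : pending) {
            pushToUi(chunk.get());          // forward each chunk to the UI in page order
        }
        pool.shutdown();
    }

    // Hypothetical stand-ins for the real sub-service calls and UI update.
    private List<String> fetchPage(int page) { return List.of("item from page " + page); }
    private void pushToUi(List<String> chunk) { System.out.println(chunk); }
}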

Obviously you also have Approach C: request everything at once, send the main items without child items, and then update the UI with the children later.

I wouldn't normally go for this approach, as presumably the UI needs that extra info to perform its function. Showing the parent and later showing the children is just a different kind of spinner.

magento2 – Get data from DB (load method) Magento 2

I have a controller, a template, and a view.
I need to use the load method, but I don't know how to do that:

    $post = $this->collectionFactory->create();
    $this->collectionFactory->load($post, 'id');
    return $this;

This code in the controller doesn't work; I get an error.
How can I get data from the load() method? (load method only)

data leakage – Compiler-induced information leaks/side-channels in cryptography implementations

In Cryptography Engineering, Ferguson, Schneier and Kohno put a big emphasis on code quality in order to prevent it from leaking information and from being vulnerable to memory-corruption exploits.

Re-implementing cryptography, especially when open-source libraries are already available, widely used, and scrutinized, is usually said to be a recipe for disaster; yet sometimes serious vulnerabilities are found in those libraries anyway. As a result, some projects aim to simplify them and strip out bad or unused code to reduce the attack surface. Also, given its aim of making it hard for programmers to write vulnerable code, rewriting some algorithms or protocols in Rust could seem like a good idea.

However, even if top programmers with an ideal cryptography background manage to write perfect code, compilers in their default state still have a slight tendency to treat instructions as suggestions rather than orders in the name of optimization, with consequences for security.
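
To make the kind of code I have in mind concrete, here is a small illustrative Java example (my own toy example, not taken from the book): the early-exit comparison leaks timing through its running time, and the accumulator version is exactly the sort of "constant-time" idiom that an optimizing compiler or JIT is, in principle, free to transform unless the toolchain guarantees otherwise.

import java.security.MessageDigest;

public class TimingCompare {
    // Leaky: returns as soon as a byte differs, so runtime depends on the secret.
    static boolean naiveEquals(byte[] a, byte[] b) {
        if (a.length != b.length) return false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] != b[i]) return false;
        }
        return true;
    }

    // Intended to be constant-time: accumulate differences, decide once at the end.
    // An optimizer that proves the early exit is "equivalent" could undo this intent.
    static boolean constantTimeEquals(byte[] a, byte[] b) {
        if (a.length != b.length) return false;
        int diff = 0;
        for (int i = 0; i < a.length; i++) {
            diff |= a[i] ^ b[i];
        }
        return diff == 0;
    }

    public static void main(String[] args) {
        byte[] secret = {1, 2, 3, 4};
        byte[] guess = {1, 2, 9, 9};
        System.out.println(naiveEquals(secret, guess));           // false, but timing-dependent
        System.out.println(constantTimeEquals(secret, guess));    // false
        System.out.println(MessageDigest.isEqual(secret, guess)); // library constant-time compare
    }
}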


Now, my questions are the following:

  • What kinds of information leaks and side-channels could be caused by compilers alone?
  • What specific compiler features cause them and must be switched off to prevent them?

And, most importantly:

  • How can one check the resulting binaries accurately to make sure those side-channels and leaks are not present?

java – Code that lists and inserts data to database: Make methods more reusable in superclass

I'm trying to make some methods more reusable and now have three classes that work together: AbstractDao (the superclass), MemberDao (extends AbstractDao) and ProjectTaskDao (extends AbstractDao). The insert method is very similar in both subclasses, with some minor differences:

insert method in MemberDao:

public void insert(Member member) throws SQLException {
    try (Connection connection = dataSource.getConnection()) {
        try (PreparedStatement statement = connection.prepareStatement(
                "INSERT INTO members (member_name, email) values (?, ?)",
                Statement.RETURN_GENERATED_KEYS
        )) {
            statement.setString(1, member.getName());
            statement.setString(2, member.getEmail());
            statement.executeUpdate();

            try (ResultSet generatedKeys = statement.getGeneratedKeys()) {
                generatedKeys.next();
                member.setId(generatedKeys.getLong("id"));
            }
        }
    }
}

insert method in ProjectTaskDao:

public void insert(ProjectTask task) throws SQLException {
    try (Connection connection = dataSource.getConnection()) {
        try (PreparedStatement statement = connection.prepareStatement(
                "INSERT INTO project_tasks (task_name) values (?)",
                Statement.RETURN_GENERATED_KEYS
        )) {
            statement.setString(1, task.getName());
            statement.executeUpdate();

            try (ResultSet generatedKeys = statement.getGeneratedKeys()) {
                generatedKeys.next();
                task.setId(generatedKeys.getLong("id"));
            }
        }
    }
}

These methods do the same thing, with some minor differences in the "statement.setSomething()" calls. I'm not that advanced in Java yet, so I'm kind of stuck on this. I would also like to pull these methods up into the superclass as well:

list method in MemberDao:

public List<Member> list() throws SQLException {
    List<Member> members = new ArrayList<>();
    try (Connection connection = dataSource.getConnection()) {
        try (PreparedStatement statement = connection.prepareStatement("SELECT * FROM members")) {
            try (ResultSet rs = statement.executeQuery()) {
                while (rs.next()) {
                    Member member = new Member();
                    member.setName(rs.getString("member_name"));
                    members.add(mapRow(rs));
                }
            }
        }
    }
    return members;
}

list method in ProjectTaskDao:

public List<ProjectTask> list() throws SQLException {
    List<ProjectTask> tasks = new ArrayList<>();
    try (Connection connection = dataSource.getConnection()) {
        try (PreparedStatement statement = connection.prepareStatement("SELECT * FROM project_tasks")) {
            try (ResultSet rs = statement.executeQuery()) {
                while (rs.next()) {
                    //ProjectTask projectTask = new ProjectTask();
                    //projectTask.setName(rs.getString(""));
                    tasks.add(mapRow(rs));
                }
            }
        }
    }
    return tasks;
}

And here is the whole superclass as it stands currently:

public abstract class AbstractDao<T> {
    protected final DataSource dataSource;

    public AbstractDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    protected T retrieve(Long id, String sql) throws SQLException {
        try (Connection connection = dataSource.getConnection()) {
            try (PreparedStatement statement = connection.prepareStatement(sql)) {
                statement.setLong(1, id);
                try (ResultSet rs = statement.executeQuery()) {
                    if (rs.next()) {
                        return mapRow(rs);
                    } else {
                        return null;
                    }
                }
            }
        }
    }

    protected abstract T mapRow(ResultSet rs) throws SQLException;
}
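
To show the direction I was imagining (an untested sketch only; ParameterBinder is just a placeholder name I made up), the superclass would own the JDBC plumbing while the subclasses pass in only the SQL and a parameter-binding callback:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

public abstract class AbstractDao<T> {
    protected final DataSource dataSource;

    public AbstractDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Subclasses supply the SQL and a callback that binds their own columns; the
    // superclass owns the connection/statement/generated-key plumbing and returns
    // the generated id so the caller can set it on the entity.
    protected long insert(String sql, ParameterBinder binder) throws SQLException {
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            binder.bind(statement);
            statement.executeUpdate();
            try (ResultSet keys = statement.getGeneratedKeys()) {
                keys.next();
                return keys.getLong(1); // first generated column, read by index here
            }
        }
    }

    // Shared SELECT-all helper; mapRow() stays abstract exactly as it is now.
    protected List<T> list(String sql) throws SQLException {
        List<T> results = new ArrayList<>();
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(sql);
             ResultSet rs = statement.executeQuery()) {
            while (rs.next()) {
                results.add(mapRow(rs));
            }
        }
        return results;
    }

    protected abstract T mapRow(ResultSet rs) throws SQLException;

    // Small callback type so a lambda can bind the ?-parameters.
    @FunctionalInterface
    protected interface ParameterBinder {
        void bind(PreparedStatement statement) throws SQLException;
    }
}

The insert method in MemberDao would then shrink to something like:

public void insert(Member member) throws SQLException {
    long id = insert("INSERT INTO members (member_name, email) values (?, ?)", statement -> {
        statement.setString(1, member.getName());
        statement.setString(2, member.getEmail());
    });
    member.setId(id);
}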

Let me know if you need more information to help! Thanks 😀

I am a data entry expert with the high accuracy for $50

I am a data entry expert with the high accuracy

Hello Sir, my name is Mati-ur-Rehman. I am writing to you for the data entry position that you are looking to fill on an urgent basis. Data entry is my passion and I have 4+ years of experience. I am new to the freelance marketplace, but I do this work with my heart and soul. My typing speed is a great asset for your project, and I have been working with MS Office for the last 4-5 years at Berger Paints. I have good market research skills that will help you do something innovative in your project. I am friendly to talk to and can stay connected with my client via whatever method they prefer. Thank you. Best regards, Mati-ur-Rehman


transaction input – How can the segwit witness data be “off-chain”? What does it really mean?

No, that's not accurate: witness data is on-chain, inputs have a similar byte length as before, they are just weighted differently, and transaction throughput is increased because segwit is a de facto blocksize increase.

The claim being responded to here is: "Thus, with segwit, the witness data was separated from the rest of the input. More specifically, the witness data is now 'off-chain'. This made the input much lighter and, in turns, it made spending an UTXO cheaper and faster to process."

This is a common misunderstanding perpetuated by an abundance of (sometimes deliberately) confusing descriptions of how segwit works.

A transaction is not complete without the proof that it was authorized by the owner of the spent funds. As such, the witness is explicitly part of a “complete transaction”. What segwit did was to segregate the witness (read “signature”) out of the input script and move it to the “witness section” of the transaction. The witness section is at the same hierarchical level as the inputs and outputs.

The witness section is excluded when calculating the transaction id (txid), but it is part of the transaction, and used to calculate the witness transaction id (wtxid). While the merkle root in the blockheader commits to the txids of the included transactions, each segwit block additionally commits to a merkle tree of the transactions’ wtxids. In conclusion, the witnesses are a) part of the transaction, b) part of the blockchain, c) necessary to fully validate the blockchain.

Segwit replaced the blocksize limit with a blockweight limit. The blocksize was based on the raw byte length of transactions and capped at 1,000,000 bytes. The blockweight limit is capped at 4,000,000 weight units, where weight is calculated by counting witness bytes with a factor of one and non-witness bytes with a factor of four. This happens to result in an equivalent limit if a block only includes transactions without witness data.
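
As a toy illustration of that formula (illustration only, not consensus code), here is the arithmetic in Java; it reproduces the witness-free case and the record block mentioned in the next paragraph.

public class BlockWeight {
    // weight = 4 * (non-witness bytes) + 1 * (witness bytes); the limit is 4,000,000 WU
    static long weight(long nonWitnessBytes, long witnessBytes) {
        return 4 * nonWitnessBytes + witnessBytes;
    }

    public static void main(String[] args) {
        // A pre-segwit-style block of 1,000,000 bytes with no witness data hits exactly
        // 4,000,000 WU, which is why the new limit is equivalent for witness-free blocks.
        System.out.println(weight(1_000_000, 0)); // 4000000

        // The record block below (2,422,858 bytes, 3,993,379 WU) implies a split of about
        // 523,507 non-witness bytes and 1,899,351 witness bytes.
        System.out.println(weight(523_507, 1_899_351)); // 3993379
    }
}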

The actual transaction data of a segwit input compared to a non-segwit input is only marginally smaller. It is the discount of witness data that allows blocks to exceed the previous blocksize limit, making segwit an effective blocksize increase with the biggest block to date having 2,422,858 bytes (but 3,993,379 weight units).

Segwit transactions and blocks are made forward compatible with pre-segwit nodes. Segwit nodes will strip the witness data before relaying the data to pre-segwit nodes. The stripped transactions and blocks are non-standard but valid according to the pre-segwit protocol rules, and thus pre-segwit nodes can follow the blockchain and converge on the same UTXO set.

Note that pre-segwit nodes have not been “fully validating nodes” since segwit was activated on August 24th 2017 as they do not unilaterally enforce all consensus rules of the Bitcoin protocol.

post processing – Can the “Dust Delete Data” function on Canon DSLR be used with Lightroom?

Though Canon doesn't disclose their Dust Delete algorithm, the Canon Digital Learning Center published, several years ago, this quick tutorial on how to use such a file to perform dust deletion with a third-party RAW processor, in case it offers a similar option.

Amazingly enough, the article's author, though obviously writing on Canon's assignment, selflessly points out that such a procedure can even be accomplished with files from competitors' cameras. Though we already knew that, the fact that he chose not to ignore it is as commendable as the article itself, IMO.

database design – Data Schema For Stock Control / Multi Source Inventory

I'm working on a project that involves stock control with multiple stock sources and sales channels. The overall hierarchy I've got so far looks like this:

Sales Channels <---- Allocated Stock Sources <---- Stock Locations (warehouses) <---- Stock Sub Locations <---- Shelf / Bin Locations

As far as rules go for how these entities relate to each other, I've come up with this:

The system must have one or more sales channels, each sales channel must have 1 or more stock sources, a stock source must have 1 or more stock locations (warehouses / buildings / distribution centres), and a stock location may have 1 or more Bin/Shelf locations.

A product may have 1 or more stock locations, may have one or more sub locations in those stock locations, and may have one or more Shelf / Bin locations.

First off, is this a solved problem where some reference schema exists I could utilise and save myself some headaches?

If there isn't a reference design for this situation, am I best off building a one-to-many relationship to assign stock to sub location(s) and another one-to-many relationship for Shelf Locations (where they exist)?

unity – Any good services to store player data (both confidential and non-confidential)?

Are there any good places to store player data (both confidential and non-confidential)? I want to store passwords non-locally, so PlayerPrefs is out of the question. Is there a way to store data keyed to an email and password? E.g. the player logs in with a username and password to PlayFab, and I use that login info to log in to a data store and fetch the saved weapon, profile picture, etc. Would Firebase work for my situation, since it is free? I tried using PlayFab's player data system, but that only supports JSON string key-value pairs, and I would like to store arrays and structs alongside strings. If it helps, this is a PC game.
