Present a hierarchical view (tree) containing sequential and parallel tasks

How would you present a hierarchical view (tree) containing sequential and parallel tasks?

  1. The tree can contain two levels of groups, where each group can
    contain one or more tasks
  2. A specific group can contain either sequential tasks or parallel tasks
  3. A group can be collapsed, and I still think we need a
    sequential/parallel indication in that case
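One way to satisfy all three points (a sketch only; the node fields and markers below are illustrative, not a prescription): store the sequential/parallel mode on the group node itself and render the mode marker on the group's own row, so it stays visible even when the group is collapsed.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    mode: str = ""            # "seq" or "par" for groups, "" for leaf tasks
    children: list = field(default_factory=list)
    collapsed: bool = False

MARK = {"seq": "[->]", "par": "[=]"}   # sequential / parallel indicators

def render(node, depth=0, out=None):
    out = [] if out is None else out
    mark = (" " + MARK[node.mode]) if node.mode else ""
    out.append("  " * depth + node.label + mark)
    if not node.collapsed:               # collapsing hides children only,
        for child in node.children:      # never the group's own mode marker
            render(child, depth + 1, out)
    return out

tree = Node("Deploy", "seq", [
    Node("Build", "par", [Node("api"), Node("web")], collapsed=True),
    Node("Migrate DB"),
])
print("\n".join(render(tree)))
```

Because the marker lives on the group row rather than on the children, the collapsed "Build" group still shows its parallel indicator.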

parallelism – Processing messages in parallel with Kafka & Akka

We have data in Kafka which we need to export to multiple destinations. Each message key is to be exported to one destination.

A destination can be a REST endpoint, a file, a database etc.

Each exporter can have its own speed or rate limits and one exporter should not slow down the other.

In Kafka, parallelism depends on the number of partitions rather than on the number of messages.

Approach #1

We decided to use Akka: we read each message from the Kafka topic and tell it to the exporter actors, each of which exports to its respective destination (REST, file, database, etc.).

Problem: at-most-once semantics only. The problem here is that we have to commit the messages in Kafka. When we tell a message to an actor, we do not know whether that actor has processed it yet; it may still be sitting in the mailbox when we commit the offset. Committed messages are not read again after a process restart, so unprocessed ones are lost.

while (true) {
    consumer.poll(Duration.ofMillis(100)).forEach(record ->
        getExporter(record.key()).tell(record, ActorRef.noSender()));
}
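One way to recover at-least-once delivery without blocking the poll loop (a sketch in plain Python with Kafka and Akka stubbed out; all names here are illustrative): have each exporter acknowledge a record only after a successful export, and commit only offsets below the lowest still-unacknowledged record.

```python
import threading

class OffsetTracker:
    """Tracks which records the exporters have finished, so the consumer
    commits only offsets whose records are fully processed."""
    def __init__(self):
        self._lock = threading.Lock()
        self._inflight = set()
        self._next = 0                     # next offset the consumer hands out

    def dispatched(self, offset):
        with self._lock:
            self._inflight.add(offset)
            self._next = max(self._next, offset + 1)

    def acked(self, offset):               # called by an exporter after export
        with self._lock:
            self._inflight.discard(offset)

    def committable(self):
        """Highest offset safe to commit: everything below the lowest
        record that is still sitting in some exporter's mailbox."""
        with self._lock:
            return min(self._inflight) if self._inflight else self._next

tracker = OffsetTracker()
for off in (0, 1, 2):
    tracker.dispatched(off)
tracker.acked(0)
tracker.acked(2)                # offset 1 is still in a mailbox
print(tracker.committable())    # 1: commit here, so 1 is re-read on restart
```

On restart the consumer resumes from the committed offset, so a record still sitting in a mailbox is re-delivered instead of lost; exporters must then tolerate duplicates (at-least-once, not exactly-once).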

Approach #2

Read each message, store it in a persistent file, export it and remove it from the persistent file after export.

We need a persistent actor for this, which we would have to tell to. So we may use ask instead, wait until the actor has stored the message, and only then tell it to the exporter.
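Approach #2 can be sketched without Akka persistence at all (the file name and record format below are made up for illustration): append each record to a journal before export and append a "done" marker after, so replaying the journal on restart yields exactly the records that were never exported.

```python
import json, os

JOURNAL = "export-journal.log"
if os.path.exists(JOURNAL):
    os.remove(JOURNAL)                     # start fresh for this demo

def enqueue(record):
    # Write-ahead: persist the record before attempting the export.
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"op": "add", "rec": record}) + "\n")

def mark_done(record):
    # Appended only after the export to REST / file / DB succeeded.
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"op": "done", "rec": record}) + "\n")

def pending():
    """Replay the journal: records added but never marked done."""
    todo = []
    with open(JOURNAL) as j:
        for line in j:
            e = json.loads(line)
            if e["op"] == "add":
                todo.append(e["rec"])
            else:
                todo.remove(e["rec"])
    return todo

enqueue("k1:v1"); enqueue("k2:v2")
mark_done("k1:v1")              # only k1 was exported before the crash
print(pending())                # ['k2:v2'] must be re-exported on restart
```

The journal plays the role of the persistent actor's state; compaction (rewriting the file without completed entries) would be needed in a real system.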

Are there any better ways of doing this? Are there any reference architectures?

cpu: are Geekbench multi-core scores done by embarrassingly parallel processes?

For each processor, Geekbench currently reports single-core and multi-core scores. For example, this 64-core processor has a single-core score of 1220 and a multi-core score of 23688. Is the Geekbench multi-core benchmark measured by (Case 1) running multiple single-core benchmarks on multiple cores at once (embarrassingly parallel)? Or (Case 2) is it a single set of benchmarks run once, whose workloads may not parallelize as well?

For the 64-core CPU example above, multi-core performance does not reach the 64 × 1220 = 78080 we might expect from linear scaling of single-core behavior. In Case 2, this shortfall could be expected due to imperfect parallelization of the benchmark workloads. In Case 1, however, the difference must be inherent to the processor, which cannot deliver full single-core performance on all cores simultaneously, presumably due to thermal management.
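Whichever case applies, the gap can be quantified directly from the published scores (using the numbers quoted above):

```python
single, multi, cores = 1220, 23688, 64
ideal = single * cores              # 78080 under perfect linear scaling
efficiency = multi / ideal          # fraction of the ideal actually achieved
speedup = multi / single            # effective number of "full-speed" cores
print(f"efficiency = {efficiency:.1%}, speedup = {speedup:.1f}x")
# efficiency = 30.3%, speedup = 19.4x
```

So the 64 cores deliver roughly the throughput of 19 independent single-core runs; the question is which mechanism (benchmark serialization in Case 2 vs. thermal/frequency limits in Case 1) accounts for the missing 70%.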

looking for a dedicated server for parallel processing

I am using a program I wrote myself that needs parallel processing. I already tried some AWS CPU instance types and none of them can handle it …

sharepoint online: javascript for-loop problems with parallel ajax calls

I am using a for loop in JavaScript to make multiple AJAX calls, but the value of i does not behave as expected in the loop. I have checked this with both async: false and async: true. The value of i inside the callback comes out as 7, although it should be 0 for the first iteration. Please help.

        var results = new Array(6);
        for (var i = 0; i < results.length; i++) {
            $.ajax(/* request options */).done(function (data) {
                results[i] = data;   // `i` has already run past the end of the loop here
            });
        }
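This is the classic closure-over-a-loop-variable pitfall: every callback closes over the variable i itself, not its value at the time the AJAX call was issued, so by the time the responses arrive i has already run past the end of the loop. The same mechanism is easy to reproduce in Python (an illustrative sketch); in JavaScript the usual fix is to declare the loop variable with let instead of var.

```python
# All callbacks share the one loop variable, so they see its final value.
callbacks = []
for i in range(7):
    callbacks.append(lambda: i)            # late binding: i is read when called

print([cb() for cb in callbacks])          # [6, 6, 6, 6, 6, 6, 6]

# Fix: bind the current value at definition time.
fixed = []
for i in range(7):
    fixed.append(lambda i=i: i)            # default argument snapshots i now

print([cb() for cb in fixed])              # [0, 1, 2, 3, 4, 5, 6]
```

Note that async: false does not help: the capture happens when the callback is created, regardless of when the request runs.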

postgresql – Calculation of forecast data in parallel

I'd like to ask how to compute forecast data in parallel in a relational database like PostgreSQL, which seems like a very typical problem when making forecasts. Let's say we are trying to get the average sales rate over the last X hours for each item we have, and our data looks like this:

Table: items

 item_id  | avg |
 86401    | tbd |
 1234     | tbd |
 22779195 | tbd |

Table: sales

 item_id | qty_sold | time
 86401   | 5        | 2020-01-01T00:00:00
 1234    | 5        | 2020-01-01T00:00:00
 86401   | 2        | 2020-01-01T21:04:04

A query to get the average sales rate of a single item would be simple (forgive the syntax), like

SELECT item_id, avg(qty_sold)
FROM sales
WHERE item_id = 86401
  AND time BETWEEN '2020-01-01T00:00:00' AND '2020-01-01T23:59:59'
GROUP BY item_id;

But how would you compute that efficiently for every item and then save it in the items table's avg column for quick reference later? Furthermore, suppose the items can be in different locations and you have to get the sales rate for each item at each location — what would the query look like then?

I guess I don't understand how to think in SQL yet: coming from programming, it feels like iterating over all the elements in the items table and running that query for each one, but that is not the right way to think in SQL. With locations in the mix, it would iterate over the elements and again over each location the element is in, like a nested for loop, which is terrible for performance. I would like help understanding this, since it looks like a pattern that arises often when working with forecast data.
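The set-based way to think about it: one aggregate over sales grouped by item, then a single UPDATE that writes the result back, rather than a per-item loop in application code. A runnable sketch using SQLite from Python (the same statements work, with minor changes, in PostgreSQL; the data is the sample from the question):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE items (item_id INTEGER PRIMARY KEY, avg REAL);
CREATE TABLE sales (item_id INTEGER, qty_sold INTEGER, time TEXT);
INSERT INTO items (item_id) VALUES (86401), (1234), (22779195);
INSERT INTO sales VALUES
  (86401, 5, '2020-01-01T00:00:00'),
  (1234,  5, '2020-01-01T00:00:00'),
  (86401, 2, '2020-01-01T21:04:04');
""")

# One statement updates every item's average; the database handles the
# "iteration", not your code.
db.execute("""
UPDATE items
SET avg = (SELECT AVG(qty_sold) FROM sales
           WHERE sales.item_id = items.item_id
             AND time BETWEEN '2020-01-01T00:00:00'
                          AND '2020-01-01T23:59:59');
""")

print(db.execute("SELECT item_id, avg FROM items ORDER BY item_id").fetchall())
# [(1234, 5.0), (86401, 3.5), (22779195, None)]
```

For the per-location variant, you would add a location column to both tables and correlate (or GROUP BY) on both item_id and location; the planner still evaluates it as one set operation rather than a nested loop you write yourself.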

parallel computing – Estimation of P in Amdahl's Law theoretically and in practice

In parallel computing, Amdahl's law is mainly used to predict the theoretical maximum speedup of a program when using multiple processors. If we denote the speedup by S, then Amdahl's law is given by the formula:

S = 1 / ((1 - P) + (P / N))

where P is the proportion of a system or program that can be parallelized, and 1 - P is the proportion that remains serial. My question is: how can we calculate or estimate P for a given program?

More specifically, my question has two parts:

How can we calculate P theoretically?
How can we calculate P in practice?
I know my question might be easy, but I am learning.
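The two directions can be made concrete (a sketch; the inversion is just algebra on Amdahl's law): theoretically, P comes from profiling which fraction of the runtime is parallelizable; in practice, P is usually estimated by measuring the speedup S on N processors and solving Amdahl's law for P (the Karp–Flatt approach).

```python
def amdahl_speedup(P, N):
    """Predicted speedup for parallel fraction P on N processors."""
    return 1.0 / ((1.0 - P) + P / N)

def estimate_P(S, N):
    """Invert Amdahl's law from a measured speedup S on N processors."""
    return (1.0 - 1.0 / S) / (1.0 - 1.0 / N)

# Theoretical direction: a profile says 90% of the runtime parallelizes.
print(amdahl_speedup(0.9, 8))       # about 4.7x, not 8x

# Practical direction: a measured 4.7x on 8 cores implies P of about 0.9.
print(estimate_P(4.7, 8))
```

The theoretical estimate comes from profiler output (time in parallelizable regions divided by total time); the measured estimate absorbs real overheads such as synchronization and memory contention, so it typically comes out lower.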


Are Core's blockchain reads parallel?

Does Core hold a mutex that allows only one read at a time from the blockchain, or can reads happen in parallel? If the latter, it would be reasonable to use the HTTP REST API with parallel queries — am I correct?

geometry: how can I prove that MN is parallel to AC?


So far, I have been able to prove that M, I, N are collinear and that AA1 is perpendicular to B1C1. I have also tried to prove the result using Brianchon's theorem, but to no avail. Can someone help me with this? Any help is really appreciated!

c++ – simple parallel download using a connection pool class with cpprestsdk

The following is a simple class that establishes multiple HTTP connections, mainly to download a list of small files:


#include <cpprest/http_client.h>
#include <cpprest/filestream.h>

#include <atomic>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

using namespace utility;                    // Common utilities like string conversions
using namespace web;                        // Common features like URIs
using namespace web::http;                  // Common HTTP functionality
using namespace web::http::client;          // HTTP client features
using namespace concurrency::streams;       // Asynchronous streams

using Function = std::function<void(http_response, int)>;

class ConnectionPool {
public:
    ConnectionPool(size_t nWorkers, std::wstring baseUri) : BaseUri(baseUri) {
        for (size_t i = 0; i < nWorkers; i++)
            Pool.emplace_back(http_client(baseUri), http_request(methods::GET));
    }

    void ResetState(size_t nWorkers, std::wstring baseUri) {
        BaseUri = baseUri;
        nDone = 0;
        Pool.clear();
        for (size_t i = 0; i < nWorkers; i++)
            Pool.emplace_back(http_client(baseUri), http_request(methods::GET));
    }

    void ResizePool(size_t nWorkers) {
        Pool.resize(nWorkers, { http_client(BaseUri), http_request(methods::GET) });
    }

    void DownloadAsync(std::vector<std::wstring> Uris, const Function& f) { // Not implemented
        WorkItems = Uris;
        const size_t limit = (std::min)(Pool.size(), WorkItems.size());
        for (size_t i = 0; i < limit; i++) assignWork(i, f);
    }

    void DownloadSync(const std::vector<std::wstring>& Uris, const Function& f) {
        std::wcout << L"*DownloadSync Started*" << std::endl;
        WorkItems = Uris;
        for (size_t i = nDone = 0, limit = nActive = (std::min)(Pool.size(), WorkItems.size()); i < limit; ++i)
            assignWork(i, f);

        std::unique_lock lk(m1);
        cv.wait(lk, [&]() { return nActive == 0; });
        std::wcout << L"*DownloadSync Ended*" << std::endl;
    }

private:
    void assignWork(int pidx, const Function& f) {
        // m2 isn't needed, right?!
        if (nDone >= WorkItems.size()) {
            std::lock_guard lk(m1);
            if (--nActive == 0) cv.notify_one();  // last idle worker wakes DownloadSync
            return;
        }
        const int cIdx = nDone++;
        const auto wItem = WorkItems[cIdx];

        std::wcout << L"Worker " << pidx << L": Assigning\t" << wItem << std::endl;
        auto& [client, request] = Pool[pidx];
        request.set_request_uri(wItem);           // point this worker at its next URI

        client.request(request).then([=](pplx::task<http_response> responseTask) {
            try {
                if (auto response = responseTask.get(); response.status_code() == http::status_codes::OK)
                    f(response, cIdx);
                std::wcout << L"Worker " << pidx << L": Downloading\t" << wItem << L" succeeded" << std::endl;
            }
            catch (const std::exception&) {
                std::wcout << L"Worker " << pidx << L": Downloading\t" << wItem << L" failed" << std::endl;
            }
            assignWork(pidx, f);                  // pick up the next work item
        });
    }

    std::vector<std::pair<http_client, http_request>> Pool;
    std::vector<std::wstring> WorkItems;
    std::wstring BaseUri;
    std::mutex m1/*,m2*/;
    std::condition_variable cv;
    std::atomic<size_t> nActive = 0, nDone = 0;
};

int main() {
    const size_t n = 4;                           // pool size
    std::vector<std::wstring> urls;               // list of small-file URIs to fetch
    ConnectionPool con(n, L"base url");
    con.DownloadSync(urls, [](http_response res, int idx) {
        auto outFile = fstream::open_ostream(std::to_wstring(idx) + L".ext").get();
        res.body().read_to_end(outFile.streambuf()).wait();
        outFile.close().wait();
    });
}