wallet.dat: pycrypto or libssl not found, decryption may be slow

Your screenshot shows two messages.

The first is an error:

ERROR:root:Couldn't open wallet.dat/main

It suggests that you should shut down Bitcoin if it is already running. I would use Windows Task Manager to see whether bitcoin-qt, bitcoind, etc. are running (Ctrl+Alt+Del in Windows 10 opens a menu from which you can select Task Manager). You can close those processes from Task Manager if necessary.

As your screenshot shows a second attempt without this error, it appears Bitcoin-Qt had been closed and the command completed successfully.


The second is a warning:

WARNING:root:pycrypto or libssl not found, decryption may be slow

This is just a warning; it does not mean the command will fail, only that it may take longer than usual.

The message means that pywallet falls back to its slow, built-in crypto functions when it cannot find a fast external crypto library.
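As a rough illustration of that fallback (this is a sketch, not pywallet's actual code), the tool essentially tries to import a fast AES implementation and carries on without one if the import fails; installing pycrypto (or its drop-in replacement pycryptodome) should therefore make the warning go away:

# Illustrative sketch of the fallback behind the warning (not pywallet's actual code)
try:
    from Crypto.Cipher import AES      # provided by pycrypto, or by the pycryptodome drop-in
    fast_crypto_available = True
except ImportError:
    AES = None                         # only slow pure-Python routines remain
    fast_crypto_available = False

print("fast AES available" if fast_crypto_available else "decryption may be slow")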


So your wallet.txt file should contain the information you wanted to extract.

mongodb: Mongo slow start after update to 4.2.5

I have updated a MongoDB replica set of 3 members from 4.0.11 to 4.2.5. After the update, startup takes about 5 minutes; before the update it was instant. It is related to the size of the oplog, because when I cleared the oplog on the new mongo 4.2 the start was instantaneous. The maximum oplog size was 25 GB; I reduced it to 5 GB and startup is still slow. MongoDB runs on AWS with standard EBS disks, and it worked fine until this update. Do you have any idea what can cause such a slow start? Thank you.
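For reference, here is a sketch of how one might inspect (and resize) the oplog from Python with pymongo, assuming a connection to one of the replica-set members; the 5 * 1024 value just mirrors the 5 GB figure above, expressed in megabytes as replSetResizeOplog expects:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # adjust to your replica-set member

# collStats on local.oplog.rs reports the configured cap (maxSize) and used space, in bytes
stats = client.local.command("collStats", "oplog.rs")
print("oplog maxSize (GB):", stats["maxSize"] / 1024**3)
print("oplog used (GB):   ", stats["size"] / 1024**3)

# replSetResizeOplog takes the new cap in megabytes; it only affects the member it runs on
client.admin.command({"replSetResizeOplog": 1, "size": 5 * 1024})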

Python: advice on taming a slow loop required for viewing large GIS data sets

[Screenshot: Bokeh plot of a sample of the dataset]

I am working on plotting a large GIS dataset; the screenshot above shows a sample of about 1/6 of the data. I am happy with how fast the data loads, and Bokeh renders the HTML almost instantly. However, I have run into a fairly intensive loop in my code that does not scale well as I increase 1) the number of rows and 2) the resolution of the polygons. The # counts points loop is killing me, and I wonder if there isn't a better way to do this?

I found the suggestion for the loop in a GIS readthedocs.io tutorial and was happy with its performance for a few thousand points a couple of months ago, but the project now needs to process a GeoDataFrame with > 730,000 rows. Should I be using a better method to count the number of points in each polygon? I am on a modern desktop, but the project also has access to Azure resources; maybe that is what most people doing this kind of computation professionally use? I would rather do the calculation locally, but that might mean my desktop runs at maximum CPU overnight or longer, which is not an exciting prospect. I am using Python 3.8.2 and Conda 4.3.2.

from shapely.geometry import Polygon
import pysal.viz.mapclassify as mc
import geopandas as gpd

def count_points(main_df, geo_grid, levels=5):
    """
    outputs a gdf of polygons with a columns of classifiers to be used for color mapping
    """
    pts = gpd.GeoDataFrame(main_df("geometry")).copy()

    #counts points
    pts_in_polys = ()
    for i, poly in geo_grid.iterrows():
        pts_in_this_poly = ()
        for j, pt in pts.iterrows():
            if poly.geometry.contains(pt.geometry):
                pts_in_this_poly.append(pt.geometry)
                pts = pts.drop((j))
        nums = len(pts_in_this_poly)
        pts_in_polys.append(nums)
    geo_grid('number of points') = gpd.GeoSeries(pts_in_polys) #Adds number of points in each polygon

    # Adds Quantiles column
    classifier = mc.Quantiles.make(k=levels)
    geo_grid("class") = geo_grid(("number of points")).apply(classifier)


    # Adds Polygon grid points to new geodataframe
    geo_grid("x") = geo_grid.apply(getPolyCoords, geom="geometry", coord_type="x", axis=1)
    geo_grid("y") = geo_grid.apply(getPolyCoords, geom="geometry", coord_type="y", axis=1)
    polygons = geo_grid.drop("geometry", axis=1).copy()

    return polygons
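One way to avoid the nested loops entirely is a spatial join: let geopandas match every point to the polygon that contains it in a single pass and then count matches per polygon. Below is only a sketch under the same assumptions as the function above (main_df carries the point geometries, geo_grid is the polygon grid); depending on your geopandas version the keyword is op= or predicate=:

import geopandas as gpd

def count_points_sjoin(main_df, geo_grid):
    """
    Sketch: count points per polygon with one spatial join instead of nested loops.
    """
    pts = gpd.GeoDataFrame(geometry=main_df["geometry"]).copy()

    # Match every point to its containing polygon; sjoin builds a spatial index internally
    joined = gpd.sjoin(pts, geo_grid, how="inner", op="within")

    # index_right holds the index of the containing polygon in geo_grid
    counts = joined.groupby("index_right").size()

    geo_grid = geo_grid.copy()
    geo_grid["number of points"] = counts.reindex(geo_grid.index, fill_value=0)
    return geo_grid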

macos: first run after compilation is slow on mac

The first execution of a newly compiled program is very slow, while subsequent executions are fast enough. I suspect it is related to macOS trying to verify the binary, but I would prefer to avoid this process, since I compile files very frequently and I know (at least for files I compiled myself) that they are not harmful.

The most common scenario for me is the following (a small timing sketch of these steps appears after the list):

  • Write a small C++ program (say, a simple hello world in a.cpp)
  • Compile it (g++ -std=c++11 a.cpp -o sol)
  • Run it for the first time: ./sol (takes on the order of 5 seconds)
  • Run it again: ./sol (returns instantly, as expected)
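For what it's worth, here is a small timing sketch of those steps (it assumes the a.cpp and sol names from the list and that g++ is on your PATH), just to put a number on the first-run penalty:

import subprocess, time

subprocess.run(["g++", "-std=c++11", "a.cpp", "-o", "sol"], check=True)

for attempt in (1, 2):
    start = time.perf_counter()
    subprocess.run(["./sol"], check=True, capture_output=True)
    print(f"run {attempt}: {time.perf_counter() - start:.2f}s")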

I have already looked at some related questions, but I couldn't solve the problem.

dnd 5e – Does breaking the concentration of the slow spell interrupt its effects immediately?

On your turn, you break an enemy spellcaster's concentration on the slow spell with your weapon attack.

When exactly does the spell stop affecting you?

At the same moment the enemy spellcaster loses concentration, or only on the next turn? Phrased differently: do you get your additional attacks, your bonus action, and the second half of your movement back immediately, on the same turn?

performance tuning: mapping an expression on a 3D grid is very slow

I have a pretty big expression depending on $R$, $\Phi$ and $Z$, as you can see:

[image of the expression]

I need to evaluate this expression in the following 3D grid

grid = Table[{(i + 1/2)*5, (j + 1/2)*5, k*0.0785}, {i, 0, 119}, {j, 0, 159}, {k, 0, 79}]

and I'm doing it with ParallelMap:

result = ParallelMap[expression /. {R -> #[[1]], \[Phi] -> #[[2]], Z -> #[[3]]} &, grid, {3}]

However, this takes around 4 minutes, which is quite a lot, since I have to do this for many other big expressions. Is there a way to speed it up?

postgresql: the index is not used, the filter is too slow

I'm analyzing a query that takes several minutes to execute, and I want to make it a bit faster:

EXPLAIN (ANALYZE, BUFFERS) 
SELECT * FROM filings AS filing
WHERE (filing.sources_id != 10 OR filing.action_date < NOW()::date)
AND (filing_type_id IN (4538,5080))
ORDER BY filing.action_date desc LIMIT 10

Output:

                                                                                  QUERY PLAN                                                                                  
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.56..4124.25 rows=10 width=704) (actual time=128412.522..158528.958 rows=10 loops=1)
   Buffers: shared hit=177017 read=162030 dirtied=81 written=34435
   ->  Index Scan Backward using filings_action_date_idx on filings filing  (cost=0.56..6437894.65 rows=15612 width=704) (actual time=128412.429..158528.832 rows=10 loops=1)
         Filter: ((filing_type_id = ANY ('{4538,5080}'::integer[])) AND ((sources_id <> 10) OR (action_date < (now())::date)))
         Rows Removed by Filter: 351983
         Buffers: shared hit=177017 read=162030 dirtied=81 written=34435
 Planning time: 5.454 ms
 Execution time: 158529.062 ms
(8 rows)

I don't understand why my index is not used and ~350K rows get filtered out instead:

Filter: ((filing_type_id = ANY ('{4538,5080}'::integer[])) AND ((sources_id <> 10) OR (action_date < (now())::date)))

although the index is in place:

"filings_filing_type_id_sources_id_action_date_idx" btree (filing_type_id, sources_id, action_date)

Instead, this index is in action:

"filings_action_date_idx" btree (action_date)

according to the query plan.
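If you want to experiment, one common thing to try is an index whose leading column matches the IN() filter and whose trailing column matches the ORDER BY. This is only a sketch, not a guaranteed fix (the OR condition may still end up as a filter, and the planner decides based on its cost estimates); the index name and connection string below are made up for the example:

import psycopg2

conn = psycopg2.connect("dbname=mydb")        # hypothetical connection string
conn.autocommit = True
cur = conn.cursor()

# Hypothetical index to experiment with: filing_type_id matches the IN() filter,
# action_date DESC matches the ORDER BY, giving the planner an alternative access
# path to walking filings_action_date_idx backwards and filtering ~350K rows.
cur.execute("""
    CREATE INDEX IF NOT EXISTS filings_type_action_date_idx
    ON filings (filing_type_id, action_date DESC)
""")

# Re-run the EXPLAIN from the question and compare the plans
cur.execute("""
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM filings AS filing
    WHERE (filing.sources_id != 10 OR filing.action_date < NOW()::date)
      AND filing_type_id IN (4538, 5080)
    ORDER BY filing.action_date DESC LIMIT 10
""")
for (line,) in cur.fetchall():
    print(line)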

Slow Shared Folder, Windows Host, Ubuntu Guest

VirtualBox 6 on a Windows 10 host, Ubuntu 18.04 LTS guest.

Shared folders work great, but they are slow on the Ubuntu guest side. I often use git on Ubuntu instead of Windows, so the delay can be quite noticeable with something like git status (5-10 seconds).

I used to do everything from my VM and use Samba to share files between the two, but due to circumstances beyond my control (a work VPN that kills even the private subnets), I can't do that anymore, and I don't know of any alternative other than shared folders.

folding at home – How much computing is required to "fold" the coronavirus? Why is it so slow?

I have seen various challenges about joining different Folding@Home teams to fold the coronavirus, and I am a little surprised by the situation. I participated in Folding@Home during the 90's, and at that time good progress was being made on different projects.

More than 20 years later, computers are thousands, if not millions, of times faster (especially GPUs), and considering the engagement the coronavirus has sparked, my gut feeling is that it should have been folded in a day or so (almost in a matter of minutes once the folding network got up to speed), but that does not seem to be the case.

Can anyone explain the situation? How many FLOPS (or whatever the right measure is) does it take to "fold" a virus like the coronavirus? How does the capacity of the folding network compare to that number? Why hasn't it been done already (given the capacity of today's computers and the folding network)?

(And feel free to add any other relevant details I may not have known I should ask about.)

Slow Sony Vaio laptop using Ubuntu

Applications start slowly, and the mouse sometimes freezes for a long time.

Ubuntu 19.10
Memory: 3.7 GiB
CPU: Intel Core i5-2450 @ 2.50GHz x 4
Graphics: Intel Sandybridge Mobile / AMD Turks
GNOME 3.34.2
64-bit
Disk: 640.1 GB