windows 10 – Blue Screen of Death after BIOS Update (Solved)

This is not anything I need help with, as I have solved the issue, but I have a general question that I would love some insight on. I am going to be upgrading my CPU to a Ryzen 5600X, and needed a BIOS update for it.
After the update I immediately went into the BIOS and began making all my changes, enabling DOCP and the other settings I use to increase performance. This apparently was a terrible idea, as I could not boot. Constant BSOD, usually after 1-3 seconds of loading system files. The errors reported were mostly kernel and driver issues. After it failed to boot 6 times or so, I tried to restore from a system restore point. The PC crashed during this as well, with the same Driver “SQL” error. I do not know exactly what was failing, but I know it was core system files.
After this I decided maybe I needed to boot once with default BIOS settings, then make the changes afterwards. I booted to the BIOS, cleared the CMOS via the UEFI, and booted into Windows on the first try. Being a person of moderate tech literacy, I probably should have done the CMOS clear first, but I thought I would at least try the restore point. Just a waste of time.
I shut down the PC and restarted into the BIOS, made the changes I mentioned before, and booted successfully into Windows on the first try. No issue.
So first, if you are having BSOD issues after a BIOS update, your first step should be to clear CMOS. Second, does anyone know any more about why this may be happening? I would love some insight into this issue, and why booting into a modified BIOS immediately out of the gate was a bad idea. Does the Windows kernel just need to boot once to get used to the new BIOS? Some other issue?

blockchain – Understanding Transactions, Mining, and the 10-Minute Block (Solved)

How do miners decide how many transactions they should have in a block in order to mine 1 bitcoin?

The amount of work miners do has no impact on the amount of Bitcoin they make. The block subsidy is a fixed amount, and transaction fees are determined by the transactions being included. Mining 1 Bitcoin does not require some specific amount of work.

Miners determine how many transactions they should have by determining how much in transaction fees they stand to gain from including those transactions. This is typically done by ordering all known unconfirmed transactions by their fee rate. Then the miner just includes as many transactions as the consensus rules allow, starting from the transactions with the highest fee rates.
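As a rough sketch of this greedy selection (hypothetical transaction tuples; real miners also have to respect the consensus block weight limit and account for ancestor/descendant fee packages):

# Toy fee-rate-based selection, not real mempool logic.
# Each transaction is a (txid, fee_in_satoshis, size_in_vbytes) tuple.
MAX_BLOCK_VSIZE = 1_000_000  # simplified stand-in for the consensus limit

def select_transactions(mempool):
    # Sort by fee rate (sat/vB), highest first, then fill the block greedily.
    by_fee_rate = sorted(mempool, key=lambda tx: tx[1] / tx[2], reverse=True)
    block, used = [], 0
    for tx in by_fee_rate:
        if used + tx[2] <= MAX_BLOCK_VSIZE:
            block.append(tx)
            used += tx[2]
    return block

# Example with three hypothetical transactions:
print(select_transactions([("a", 5000, 250), ("b", 1000, 150), ("c", 8000, 200)]))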

When miners compete to find the “nonce” that has the smallest possible hash number, would it be faster/easier to find the “nonce” if there are fewer transactions?

No. The number of transactions has no significant effect on the difficulty of a block or the amount of work required to mine a block. While including more transactions nominally requires computing more hashes, this is negligible in the grand scheme of things, especially since this calculation is only done once for a set of nonces and extraNonces.
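To make that concrete, here is a rough sketch (simplified field handling, not consensus-exact) showing that each nonce attempt hashes only the fixed 80-byte header, in which all transactions are summarised by a single 32-byte Merkle root:

import hashlib, struct

def header_hash(version, prev_hash, merkle_root, timestamp, bits, nonce):
    # 4 + 32 + 32 + 4 + 4 + 4 = 80 bytes; this is all that is hashed per nonce attempt.
    header = (struct.pack("<I", version) + prev_hash + merkle_root +
              struct.pack("<III", timestamp, bits, nonce))
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Example with dummy fields:
h = header_hash(2, b"\x00" * 32, b"\x11" * 32, 1700000000, 0x1d00ffff, 12345)

# The Merkle root is computed once per candidate block; after that, every nonce
# attempt hashes the same 80 bytes no matter how many transactions the block commits to.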

Do all miners work on a block containing the same set of transactions, with everyone competing to find the nonce and hash?

Everyone chooses their own set of transactions to include. Everyone is making their own block; it's just that each block points to the same parent block.

Or is it random, and it's just that whoever finds a hash the quickest saves those transactions and adds the block to the blockchain?

It is entirely random. It's whoever finds a valid block first and broadcasts it.

Also, looking at btc.com, I noticed there are blocks being mined back to back in less than 10 minutes. I thought it needed to take 10 minutes for a block to be mined?

The average time between blocks is 10 minutes, but that is an average, not a requirement. Blocks can be found in less than 10 minutes or in more than 10 minutes; it only works out to ~10 minutes on average.
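A quick sanity check of this, under the usual simplifying assumption that block discovery behaves like a Poisson process with a 10-minute mean:

import random

random.seed(1)
# Simulate 100,000 block intervals, exponentially distributed with mean 10 minutes.
intervals = [random.expovariate(1 / 10) for _ in range(100_000)]
print(f"mean interval: {sum(intervals) / len(intervals):.2f} min")
print(f"share of blocks found in under 10 min: {sum(i < 10 for i in intervals) / len(intervals):.0%}")
# The mean is ~10 minutes, yet roughly 63% of blocks arrive in under 10 minutes,
# because the exponential distribution is heavily skewed toward short intervals.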

nt.number theory – Can conjoined Legendre Diophantine equations be solved?

Legendre diophantine equations take the form

$n_1 d_1^2 + n_2 d_2^2 = n_3 d_3^2$

where $n_1, n_2, n_3$ are known integers and $d_1, d_2, d_3$ are unknown integers.

If a solution exists, a smallest solution can be found within a relatively small search space.
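For illustration only, a naive bounded brute-force search along those lines (a toy sketch; it does not use the explicit bound from Legendre's theorem on the size of the smallest solution):

def smallest_legendre_solution(n1, n2, n3, bound=200):
    # Naive search for a nontrivial (d1, d2, d3) with n1*d1^2 + n2*d2^2 == n3*d3^2.
    for d3 in range(1, bound):
        for d1 in range(bound):
            for d2 in range(bound):
                if (d1 or d2) and n1 * d1 * d1 + n2 * d2 * d2 == n3 * d3 * d3:
                    return d1, d2, d3
    return None  # no solution within the search bound

print(smallest_legendre_solution(3, 5, 2))  # (1, 1, 2): 3 + 5 = 2 * 4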

My question is with linked equations of the form

$n_1 d_1^2 + n_2 d_2^2 = n_3 d_3^2$

and

$n_1 d_1^2 + 2 n_2 d_2^2 = n_4 d_4^2$

where $n_1, n_2, n_3, n_4$ are known and $d_1, d_2, d_3, d_4$ are not.

It is trivial to find independent solutions of the two equations in which the values of $d_1$ and $d_2$ do not match between them, so that can be a starting point.

This maps onto the congruent number problem, so a solution here solves the congruent number problem.

As such, a generalized proof characterizing what all non-solutions look like would also be valuable.

python – 01 Matrix is too slow when solved using DFS

I tried solving the LeetCode 01 Matrix problem. It runs too slowly when solved using a DFS approach.

Given a matrix consisting of 0s and 1s, find the distance to the nearest 0 for each cell.
The distance between two adjacent cells is 1.
Example 1

Input:
[[0,0,0],
 [0,1,0],
 [0,0,0]]

Output:
[[0,0,0],
 [0,1,0],
 [0,0,0]]

Note:

  • The number of elements of the given matrix will not exceed 10,000.
  • There is at least one 0 in the given matrix.
  • The cells are adjacent in only four directions: up, down, left and right.
class Solution(object):
    def updateMatrix(self, matrix):
        if not matrix or not matrix[0]:
            return []
        m, n = len(matrix), len(matrix[0])
        op = [[-1 for _ in range(n)] for _ in range(m)]
        directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        def dfs(i, j):
            if matrix[i][j] == 0:
                return 0

            if op[i][j] != -1:
                return op[i][j]

            matrix[i][j] = -1
            closest_zero = float('inf')
            for direction in directions:
                x, y = direction[0] + i, direction[1] + j
                if 0 <= x < m and 0 <= y < n and matrix[x][y] != -1:
                    closest_zero = min(dfs(x, y), closest_zero)
            closest_zero += 1
            matrix[i][j] = 1
            return closest_zero

        for i in range(m):
            for j in range(n):
                if matrix[i][j] == 1 and op[i][j] == -1:
                    op[i][j] = dfs(i, j)
                elif matrix[i][j] == 0:
                    op[i][j] = 0
        return op

It runs too slowly and I don’t understand the reason for that. Most optimised solutions solve this using BFS.
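For comparison, a minimal multi-source BFS sketch of the kind those optimised solutions use (every 0-cell starts in the queue at distance 0 and distances expand outward one layer at a time):

from collections import deque

def update_matrix_bfs(matrix):
    m, n = len(matrix), len(matrix[0])
    dist = [[-1] * n for _ in range(m)]
    queue = deque()
    for i in range(m):
        for j in range(n):
            if matrix[i][j] == 0:
                dist[i][j] = 0
                queue.append((i, j))
    # Each cell is enqueued and dequeued at most once, so this is O(m*n) overall.
    while queue:
        i, j = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = i + dx, j + dy
            if 0 <= x < m and 0 <= y < n and dist[x][y] == -1:
                dist[x][y] = dist[i][j] + 1
                queue.append((x, y))
    return dist

print(update_matrix_bfs([[0, 0, 0], [0, 1, 0], [0, 0, 0]]))  # [[0,0,0],[0,1,0],[0,0,0]]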

complexity theory – What is the difference between saying there is no $\epsilon > 0$ such that a problem can be solved in $O(n^{2-\epsilon})$ time and $n^{2-o(1)}$ or $\Omega(n^2)$?

It is conjectured that 3SUM cannot be solved in time $O(n^{2-\epsilon})$ for any $\epsilon > 0$; equivalently, it requires time $n^{2-o(1)}$. This is not the same as the stronger conjecture that 3SUM requires time $\Omega(n^2)$. Indeed, the latter conjecture (which was the original form of the 3SUM conjecture) is false: Grønlund and Pettie came up with an $O(n^2/(\log n/\log\log n)^{3/2})$ algorithm (there were improvements since).

The statement “problem X requires time $n^{2-o(1)}$” states that there exists a function $f(n) = o(1)$ such that any algorithm for X runs in time at least $n^{2-f(n)}$. In particular, there is no $O(n^{2-\epsilon})$ algorithm, for any $\epsilon > 0$. The converse should also hold, I believe – could be a fun exercise to show.
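As a sanity check of the direction stated above (not the converse): if every algorithm for X has running time $T(n) \ge n^{2-f(n)}$ for some fixed $f(n) = o(1)$, then for any $\epsilon > 0$ there is an $n_0$ with $f(n) \le \epsilon/2$ for all $n \ge n_0$, so $T(n) \ge n^{2-\epsilon/2}$ eventually, which is not $O(n^{2-\epsilon})$.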

microsoft excel – Error in Function “IF, SUMIF”. Please Help Solve This Function

Can you help solve my problem with this function:

=IF(AND(C6=1,SUMIF('WIP-DM'!$A:$A,TEXT(B$6,"yyyymm"),'WIP-DM'!$K:$K),AND(C6=2,SUMIF('WIP-DL'!$A:$A,TEXT(B6,"yyyymm"),'WIP-DL'!$K:$K)),AND(C6=3,SUMIF('WIP-OH'!$A:$A,TEXT(B6,"yyyymm"),'WIP-OH'!$K:$K))),0)

I don’t know what’s wrong with this function. Please help

Solved: Problem with add condition only showing category and attribute sets? I'm stuck

As you can see, my add conditions dropdown is only showing category and attribute sets.

Thanks in advance.

[screenshot: add conditions]

Solved. I didn't know it was that simple. The "show in promotion" setting on all my attributes was off. Need more sleep.

A geometry problem solved by Manjul Bhargava

Manjul Bhargava found at age 8 that a pyramid can contain at most $\frac{1}{6}n(n+1)(n+2)$ oranges. How did he get it?
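One standard way to arrive at the count (not necessarily Bhargava's own argument): the $k$-th layer of a tetrahedral stack of oranges is a triangle holding $\frac{k(k+1)}{2}$ oranges, and summing over the $n$ layers gives $\sum_{k=1}^{n} \frac{k(k+1)}{2} = \frac{1}{6}n(n+1)(n+2)$, which can be verified by induction on $n$.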

abstract algebra – Proof in the context of Galois theory that quintics and above can’t be solved by trigonometric and exponential functions

I have been studying Galois theory as of late, and recently I came across a proof in the context of complex analysis which proves the unsolvability of the general quintic and higher-degree polynomials not only by radicals but also by trigonometric and exponential functions.

Is there an equivalent proof in the context of abstract algebra/Galois theory? Preferably this proof should follow the same line of reasoning as the Abel-Ruffini theorem (the version that I found on Wikipedia specifically).

Equation with logarithms, why can’t it be solved?

I’m trying to find the -3 dB frequency of a filter in Mathematica. Yet, my code doesn’t seem to work as it produces imaginary values for the frequency.

Tfilter=(9.36*^6 + 44226.*s + 9.477*s^2)/(2.35521*^9 + 495801.*s + 9.477*s^2)
Eqn1 = 20*Log[10, Abs[Tfilter /. s -> I*2*Pi*f]] == -3
Solve[Eqn1, f]

What am I doing wrong? Is there any additional command I should give to Mathematica?