c++ – Is a 32-bit game slower compared to a 64-bit one?

I was developing a graphics engine from scratch using Direct3D11 and some APIs, but I stumbled into a situation where it would be beneficial for me to use a certain library whose binaries are only available for 32-bit. I’ve changed my engine to 32-bit and didn’t see much difference; would it be troublesome to use it on 32 bits?

cryptography – What algorithm is preferred to compute a × b mod P with big numbers (256 bits)?

I’m trying to implement multiple precision arithmetic operations modulo P, with P < 2^256.
More specifically, P = 2^256 - 2^32 - 977.

I want to support the following operations: +, -, *, /, pow (each mod P)

As P is close to 2^256, numbers are represented with 8 u32 or 4 u64.

a + b mod P can be done like this (runnable Python that models the 256-bit wraparound):

P = 2**256 - 2**32 - 977
MASK = 2**256 - 1

def add_mod_p(a, b):          # a, b assumed already reduced mod P
    n = (a + b) & MASK        # 256-bit addition; & MASK models the wrap
    if a + b > MASK:          # overflow, i.e. over 2^256
        # add 2^256 - P to come back modulo P
        n += 2**32 + 977
    elif n >= P:
        # P <= n < 2^256
        n -= P
    return n

For a * b mod P, my first intention was to simply do a schoolbook long multiplication, but that seems slow, as I would need the carry to be 256 bits as well.

Are there any recommended algorithms to calculate a * b modulo P efficiently (using arrays of u32 / u64)?

I’m mostly interested in the multiplication because:

  • a^x mod P can be an optimized version of a * a * ... * a mod P (e.g. square-and-multiply)
  • a / b mod P can be calculated as a * b^(P-2) mod P using Fermat's little theorem

Note: Bitcoin (libsecp256k1) implements these operations with numbers represented as 10 limbs of 26 bits each (held in uint32s) instead of 8 × uint32, so each “digit” keeps 6 spare bits for carries, but I’m not familiar with their methods.
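
To make the question concrete: since this P is the secp256k1 field prime, one common trick is to use 2^256 ≡ 2^32 + 977 (mod P) and fold the high half of the 512-bit product back down. Below is a minimal sketch in Python with arbitrary-precision integers, only to illustrate the reduction idea; the names mul_mod_p / pow_mod / div_mod_p are mine, and a real implementation would carry this out on u32/u64 limbs.

P = 2**256 - 2**32 - 977
C = 2**32 + 977                          # C = 2^256 mod P

def mul_mod_p(a, b):
    t = a * b                            # full product, up to 512 bits
    # fold: t = hi*2^256 + lo ≡ hi*C + lo (mod P)
    hi, lo = t >> 256, t & (2**256 - 1)
    t = hi * C + lo                      # now below ~2^290
    hi, lo = t >> 256, t & (2**256 - 1)
    t = hi * C + lo                      # now below 2^256 + 2^67
    if t >= P:
        t -= P                           # a single subtraction suffices here
    return t

def pow_mod(a, e):
    # square-and-multiply: about 2*256 multiplications instead of e of them
    result = 1
    while e:
        if e & 1:
            result = mul_mod_p(result, a)
        a = mul_mod_p(a, a)
        e >>= 1
    return result

def div_mod_p(a, b):
    # Fermat's little theorem: a / b ≡ a * b^(P-2) (mod P)
    return mul_mod_p(a, pow_mod(b, P - 2))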

programming languages – How do computers perform operations on numbers that are larger than 64 bits?

There are many reasons why numbers larger than 64 bits must be computed. For example, cryptographic algorithms usually have to perform operations on numbers that are 256 bits or even larger in some cases. However, the programming languages that I use can only handle 64-bit integers at most, so how do computers perform operations on numbers that are larger than 64 bits, and which programming languages support computation with these larger numbers?
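
The usual answer is arbitrary-precision ("bignum") arithmetic: the number is stored as an array of machine-word "limbs" and the schoolbook algorithms are applied limb by limb, propagating carries in software. A rough sketch in Python, assuming 64-bit limbs stored least-significant first (the function name is illustrative):

BASE = 2**64

def big_add(a_limbs, b_limbs):
    result, carry = [], 0
    for i in range(max(len(a_limbs), len(b_limbs))):
        x = a_limbs[i] if i < len(a_limbs) else 0
        y = b_limbs[i] if i < len(b_limbs) else 0
        s = x + y + carry
        result.append(s % BASE)          # low 64 bits become this limb
        carry = s // BASE                # the rest carries into the next limb
    if carry:
        result.append(carry)
    return result

Libraries such as GMP, Python's built-in int, and Java's BigInteger work along these lines, with far more optimization.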

one time password – How many bits of security does a hash as a verifier provide?

The security of this approach is 2^256, or the entropy of the input, whichever is smaller. The preimage security of SHA-256 is 2^256 (note that the attack you linked to is on a reduced-round version, so it’s not applicable to the full SHA-256). However, if your input s contains less than 256 bits of entropy, then it would be easier to search the input domain, and your scheme would have as much security as the entropy in s. That could be the case if you used a 128-bit s, for example.

If s is a 512-bit string and 256 bits are known, then the security is still 2^256, since it has 256 unknown bits.

Windows 7. It keeps turning off even though I added BITS and wuauserv, and the wuauserv service is still turning off

It’s me, Mich. I also had a Windows Update problem. The problem is: the wuauserv service keeps turning off whenever I try turning it on. Here is my PC’s info:

Windows 7 SP1.

Can I have any solutions?

https://vimeo.com/manage/videos/546572190

Please, mods, don’t delete this. Look at what the Vimeo video shows.

memory management – Computer Architecture: How to determine the bits of the address used to access the cache?

Given a direct-mapped (non-associative) cache and its cache capacity, block size, and address size, how would I go about determining which bits of the address are used to access the cache? Is there a generalized formula?

If there is a generalized formula, how would that formula change or stay the same for an X-way set associative cache?
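
A sketch of the usual decomposition, assuming a byte-addressed cache whose capacity, block size, and associativity are all powers of two (the function name and example numbers are mine):

from math import log2

def address_fields(capacity_bytes, block_bytes, ways, addr_bits):
    offset_bits = int(log2(block_bytes))               # selects a byte within a block
    num_sets = capacity_bytes // (block_bytes * ways)  # direct-mapped means ways = 1
    index_bits = int(log2(num_sets))                   # selects the set (or line)
    tag_bits = addr_bits - index_bits - offset_bits    # the rest is stored as the tag
    return tag_bits, index_bits, offset_bits

# Example: 4 KiB direct-mapped cache, 32-byte blocks, 32-bit addresses
print(address_fields(4096, 32, 1, 32))                 # -> (20, 7, 5)

For an X-way set-associative cache the same formula applies with ways = X: the index shrinks by log2(X) bits and the tag grows by the same amount, because X lines now share each set.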

How to determine offset bits when addressing CPU cache?

I know that the offset is based on the line size of a cache. I have seen the example: "a 32-byte line size would use the last 5 bits (i.e. 2^5) of the address as the offset into the line", but I do not understand the process used to determine this.
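
As a worked step under the usual convention: a 32-byte line contains 2^5 byte positions, so selecting one of them takes log2(32) = 5 bits, and those are the lowest-order (last) 5 bits of the address; the index and tag come from the remaining higher bits, as in the sketch above.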

kali linux – Reverse engineering a 64-bit LSB ELF executable, x86-64, gdb

I want to reverse engineer a program. I managed to find the entry point, but every time I try to launch the application I get the same error: During startup program exited with code 126.

Here is what I did:

┌──(kali㉿kali)-(~/Documents/Guessy)
└─$ gdb guessy?token=eyJ1c2VyX2lkIjoxNDM4LCJ0ZWFtX2lkIjpudWxsLCJmaWxlX2lkIjoxNjd9.YIyJZA.QQbX2E3vChspI95coiZvSzAwDOo
GNU gdb (Debian 10.1-1.7) 10.1.90.20210103-git
Copyright (C) 2021 Free Software Foundation, Inc.                                                                                                                                                                                            
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from guessy?token=eyJ1c2VyX2lkIjoxNDM4LCJ0ZWFtX2lkIjpudWxsLCJmaWxlX2lkIjoxNjd9.YIyJZA.QQbX2E3vChspI95coiZvSzAwDOo...
(No debugging symbols found in guessy?token=eyJ1c2VyX2lkIjoxNDM4LCJ0ZWFtX2lkIjpudWxsLCJmaWxlX2lkIjoxNjd9.YIyJZA.QQbX2E3vChspI95coiZvSzAwDOo)
(gdb) break 1
No symbol table is loaded.  Use the "file" command.
(gdb) break 0x0000000000006160
Function "0x0000000000006160" not defined.
Make breakpoint pending on future shared library load? (y or (n)) 
(gdb) run
Starting program: /home/kali/Documents/Guessy/guessy?token=eyJ1c2VyX2lkIjoxNDM4LCJ0ZWFtX2lkIjpudWxsLCJmaWxlX2lkIjoxNjd9.YIyJZA.QQbX2E3vChspI95coiZvSzAwDOo 
zsh:1: permission denied: /home/kali/Documents/Guessy/guessy?token=eyJ1c2VyX2lkIjoxNDM4LCJ0ZWFtX2lkIjpudWxsLCJmaWxlX2lkIjoxNjd9.YIyJZA.QQbX2E3vChspI95coiZvSzAwDOo
During startup program exited with code 126.

I found the entry point with this:

┌──(kali㉿kali)-(~/Documents/Guessy)
└─$ objdump -f /bin/ls                                                                                                                                                                                                                 130 ⨯

/bin/ls:     file format elf64-x86-64
architecture: i386:x86-64, flags 0x00000150:
HAS_SYMS, DYNAMIC, D_PAGED
start address 0x0000000000006160

hash – Shuffling Bits For Uniform Distribution

I am writing a hashing algorithm to be used in a key-value data store; that is, for each key the location of the data is determined.

I assume that a key will not have more than 64 bits. For uniform distribution in the data store I want the key bits to be shuffled appropriately so that the data does not all collide in the same slot.

The approach I am thinking of is to have a predefined padding table from which we take the rest of the bits and shuffle the whole thing to produce a new key. I am not sure if this is an efficient way to do the job, but it should be worth exploring.

If this is a good way, I want to understand how I can create such a table: which bits should be filled in it, etc. I would be glad for any pointers.
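
Not the padding-table approach, but for comparison: a common way to "shuffle" all 64 key bits so that nearby keys land in different slots is an integer mix function. The sketch below uses the finalizer from SplitMix64 (the shift amounts and multipliers are its published constants), written in Python with explicit 64-bit masking:

MASK64 = (1 << 64) - 1

def mix64(z):
    # xor-shifts and odd multipliers spread every input bit across all 64 output bits
    z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & MASK64
    return (z ^ (z >> 31)) & MASK64

# slot = mix64(key) % number_of_slots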

information theory – Why is the “CNOT” gate the only non-trivial gate for two input bits?

Just yesterday I found out about the theory of quantum computing and I am studying it by myself.
While trying to understand the Toffoli gate on the wiki (https://en.wikipedia.org/wiki/Toffoli_gate),
I came across the sentence that the “CNOT” gate is the only non-trivial gate for two input bits,
i.e. 00 -> 00, 01 -> 01, 10 -> 11, 11 -> 10. At this point,

Question 1

the question popped up: why not 00 -> 01, 01 -> 00, 10 -> 10, 11 -> 11? I think this map is represented by the matrix
$$
\begin{bmatrix}
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$

which is different from
$$
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{bmatrix}
$$

and is also unitary.

Question 2.

Does the order of the basis matter whenever we present the operation as a matrix? If so, what is the rule?

Question 3.

I am studying with this lecture note- https://homes.cs.washington.edu/~oskin/quantum-notes.pdf
Page 12 of the note has
$$
\frac{1}{\sqrt{2}}\Big(a\,|0\rangle\big(|00\rangle+|11\rangle\big)+b\,|1\rangle\big(|00\rangle+|11\rangle\big)\Big)
=\frac{1}{\sqrt{2}}
\begin{bmatrix}
a \\ 0 \\ 0 \\ a \\ b \\ 0 \\ 0 \\ b
\end{bmatrix}
$$

but I think $|0\rangle \in \mathbb{C}^2$ and $|00\rangle, |11\rangle \in \mathbb{C}^4$, so the product of the two vectors is nonsense. Should I consider it as a tensor product of the two vectors? If we consider it as a tensor product, then it is okay, and the order of the basis vectors looks important.
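
A small NumPy check of the tensor-product reading (assuming the usual convention that a three-qubit basis ket corresponds to the column-vector index obtained by reading its bits as a binary number):

import numpy as np

ket0 = np.array([1.0, 0.0])                                        # |0>
ket1 = np.array([0.0, 1.0])                                        # |1>
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)

a, b = 0.6, 0.8                                                    # any amplitudes with |a|^2 + |b|^2 = 1
state = np.kron(a * ket0 + b * ket1, bell)                         # tensor product, not an ordinary product
print(state)   # [a, 0, 0, a, b, 0, 0, b] / sqrt(2), matching the column vector in the note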