algorithms – Multiplication mod 2 without additional registers

For a given bit string $(x_1, x_2, \ldots, x_n)$ and an $n \times n$ matrix $M$ (with binary entries), I would like to calculate their product modulo $2$,
$$
\begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{pmatrix}
= M
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}
\mod 2,
$$

without using any additional registers and using only $NOT$ and $XOR$ gates. Is it possible to build such a circuit with a sub-exponential number of gates?

A bound of $O(n^2)$ operations is trivial (this is how you would normally do the multiplication, if you had access to the original values of the registers the whole time). The question, however, is inspired by quantum computation, where one cannot store the initial values, and additional qubits are expensive.

The $O(\exp(n))$ solution is given by recursively uncomputing all the changes made to the bit that is currently being accumulated during the multiplication.
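For intuition on the in-place model, here is an illustrative sketch, not the construction the question refers to: when $M$ is invertible over $GF(2)$, Gaussian elimination decomposes $M$ into elementary row additions, each of which is a single in-place XOR (a CNOT), giving an $O(n^2)$-gate in-place circuit for that special case.

```python
# Sketch: build an in-place XOR network for an *invertible* M over
# GF(2). Each recorded (target, control) pair is one gate x[t] ^= x[c]
# (a CNOT). This covers only the invertible special case.

def xor_network_for_invertible(M):
    n = len(M)
    A = [row[:] for row in M]
    ops = []
    for col in range(n):
        # find a pivot row; one exists because M is invertible
        pivot = next(r for r in range(col, n) if A[r][col])
        if pivot != col:
            # swap the two rows with three XORs (a 3-CNOT swap)
            for t, c in ((col, pivot), (pivot, col), (col, pivot)):
                for k in range(n):
                    A[t][k] ^= A[c][k]
                ops.append((t, c))
        # clear every other 1 in this column
        for r in range(n):
            if r != col and A[r][col]:
                for k in range(n):
                    A[r][k] ^= A[col][k]
                ops.append((r, col))
    # ops reduce M to the identity; replayed in reverse they *apply* M,
    # since each XOR row-addition is self-inverse over GF(2)
    return list(reversed(ops))

def apply_in_place(ops, x):
    for t, c in ops:
        x[t] ^= x[c]
    return x

M = [[1, 1], [0, 1]]
print(apply_in_place(xor_network_for_invertible(M), [1, 1]))  # [0, 1]
```

For general (singular) $M$ this decomposition does not exist, which is exactly where the difficulty in the question starts.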

algorithms – Reduction of Kleene's predecessor for Church numerals

I am trying to "reinvent" Kleene's predecessor. The following code snippet should be self-explanatory. The idea is to use a pair (2-tuple) and count up from zero, that is, lambda f: lambda x: x, as described in this article:

#!/usr/bin/env python3

NULL = lambda x: x            # identity, used as a dummy argument
ZERO = lambda f: lambda x: x  # Church numeral 0

# Booleans take thunks and force the chosen branch with NULL
TRUE = lambda T: lambda F: T(NULL)
FALSE = lambda T: lambda F: F(NULL)
IF_ELSE = lambda cond: lambda T: lambda F: cond(T)(F)
IS_ZERO = lambda n: n(lambda _: FALSE)(TRUE)

ADD1 = lambda n: lambda f: lambda x: f(n(f)(x))  # successor

# Pairs encoded as branching on a boolean selector
MakePair = lambda first: lambda second: lambda cond: IF_ELSE(cond)(lambda x: first)(lambda x: second)
First = lambda pair: pair(TRUE)
Second = lambda pair: pair(FALSE)

# Trans maps (a, b) to (b, b + 1); iterating it n times from (0, 0)
# yields (n - 1, n), so the predecessor is the first component
Trans = lambda pair: lambda cond: IF_ELSE(cond)(lambda x: Second(pair))(lambda x: ADD1(Second(pair)))
SUB1 = lambda n: First(n(Trans)(MakePair(ZERO)(ZERO)))

THREE = ADD1(ADD1(ADD1(ZERO)))
FIVE = ADD1(ADD1(ADD1(ADD1(ADD1(ZERO)))))

if __name__ == '__main__':
    print(SUB1(THREE)(lambda x: x + 1)(0))  # 2
    print(SUB1(FIVE)(lambda x: x + 1)(0))   # 4

At the end, the linked article states that

It is then simple but tedious to expand all the shorthand expressions by hand and reduce the resulting expression to normal form. This results in the standard magical encoding of the predecessor.

I guess the normal form of Kleene's predecessor looks like this:

pred = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda y: x)(lambda x: x)
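(As a quick numerical sanity check, reusing THREE and FIVE from the snippet above, this guessed form does agree with SUB1:)

```python
# pred should decode to n - 1 under the usual Church-to-int conversion
print(pred(THREE)(lambda x: x + 1)(0))  # 2
print(pred(FIVE)(lambda x: x + 1)(0))   # 4
```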

However, after applying a series of expansions and $\beta$-reductions, I end up with this:

SUB1 = lambda n: n(lambda pair: lambda cond: cond(lambda x: pair(lambda T: lambda F: F(NULL)))(lambda x: lambda f: lambda x: f((pair(lambda T: lambda F: F(NULL)))(f)(x))))(lambda _: lambda f: lambda x: x)(lambda T: lambda F: T(lambda x: x))

Question:

How do I reduce my SUB1 function to pred? I don't think we can get any further with just $\beta$-reduction, and there must be some advanced reduction techniques unknown to me.

A step-by-step solution would be highly appreciated. Please note that this is not a homework problem; I am doing the exercise just for fun.

algorithms: options to address the stable marriage problem with unequally sized item sets / preferences

I'm looking for an algorithm / code that produces a stable matching between two sets of unequally sized items (clubs and students) with uneven sets of preferences. There is a large group of students looking to join a club, and a relatively small group of clubs for those students to join. Each student ranks only the clubs they want to join, in order of preference; in other words, a student is not required to rank every available club. At the same time, each club has a maximum number of students it can accept, and this number differs per club. Therefore the two groups for the algorithm are unevenly sized, and each item within those groups (i.e. each club and student) can have a different number of preferences.

I have examined the Gale-Shapley algorithm and envy-free matching, but have not come across any code that produces a stable matching when there is this much variation in items / preferences. Does anyone know of code that can accomplish this (preferably in something like Python or Java)?
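For what it's worth, the capacitated, incomplete-lists setting is the classical hospitals/residents variant, and student-proposing deferred acceptance handles it directly. Below is a minimal sketch; the input shapes are my own illustration, not a library API, and it assumes any club a student ranks appears in club_prefs.

```python
# Sketch of student-proposing deferred acceptance for the
# hospitals/residents variant: clubs have capacities, and preference
# lists may be incomplete on both sides.

def deferred_acceptance(student_prefs, club_prefs, capacity):
    # student_prefs: {student: [club, ...]}  (most preferred first)
    # club_prefs:    {club: [student, ...]}  (most preferred first)
    # capacity:      {club: max number of students}
    rank = {c: {s: i for i, s in enumerate(prefs)}
            for c, prefs in club_prefs.items()}
    matched = {c: [] for c in club_prefs}   # club -> accepted students
    nxt = {s: 0 for s in student_prefs}     # next list index to try
    free = list(student_prefs)

    while free:
        s = free.pop()
        prefs = student_prefs[s]
        while nxt[s] < len(prefs):
            c = prefs[nxt[s]]
            nxt[s] += 1
            if s not in rank[c]:
                continue                      # club did not rank s
            matched[c].append(s)
            matched[c].sort(key=rank[c].get)  # club's best first
            if len(matched[c]) <= capacity[c]:
                break                         # tentatively accepted
            worst = matched[c].pop()          # over capacity: reject worst
            if worst != s:
                free.append(worst)            # displaced student re-proposes
                break
        # if the loop exhausts prefs, s simply stays unmatched
    return matched
```

Students who exhaust their lists remain unmatched, which is unavoidable when preference lists are incomplete.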

algorithms – how to find all the contiguous subsequences of an array in less than O(N^2) time complexity?

I can't find the product of every contiguous subsequence of an array in less than O(N^2) time complexity.

```
public class PrintAllSubArrays {

    public void printSubArrays(int[] arrA) {

        int arrSize = arrA.length;
        // start point
        for (int startPoint = 0; startPoint < arrSize; startPoint++) {
            // end point
            for (int endPoint = startPoint; endPoint < arrSize; endPoint++) {
                // print the subarray arrA[startPoint..endPoint]
                for (int i = startPoint; i <= endPoint; i++) {
                    System.out.print(arrA[i] + " ");
                }
                System.out.println();
            }
        }
    }
}
```

algorithms – NP problem solving: analogy between the SAT problem and the shortest path problem

In this 2-minute video https://www.youtube.com/watch?v=TJ49N6WvT8M (excerpted from a free Udacity course on algorithms / theoretical computer science),

whose purpose is to show how a SAT problem can be solved as if it were a shortest-path problem,

I understand that

  • there are n route patterns, one for each of the n boolean variables that belong to an "AND of clauses"; a "clause" is an "OR group" made up of conditions on some of the n variables

    example: a SAT problem might be "find values of x1, x2, x3 such that clause1 AND clause2 AND clause3 is true", where clause1 is x1 OR x2, clause2 is x1 OR NOT(x3), etc.
  • m vertices are added to force the m clauses to be satisfied, by having a unique "local" shortest path (within a local pattern)

But what I do not understand is why every pattern has to have 3(m+1) vertices. Why are 2 diamonds per pattern not enough?

Thanks for enlightening me

Root Properties of Recurrence Relations in the Context of Exponential Algorithms, for Lowering the Upper Bound on Runtime

The book "Exact Exponential Algorithms" by Fedor V. Fomin and Dieter Kratsch is an excellent book to start learning how to design exact exponential algorithms. In their second chapter, they present the recurrence relationships in the context of a branching algorithm:

$T(n) \leq T(n-t_1) + T(n-t_2) + \dots + T(n-t_r)$

To solve this recurrence we assume $T(n) = c^n$; then $c$ must be a (complex) root of $x^n - x^{n-t_1} - x^{n-t_2} - \dots - x^{n-t_r} = 0$. The running time of this branching algorithm is governed by the largest (real) root of this equation. We call $\tau(t_1, t_2, \dots, t_r)$ the branching factor of these $r$ numbers, which is the largest positive real root of the corresponding equation. The branching vector is defined as $t = (t_1, t_2, \dots, t_r)$.

I am mainly interested in modifying such branching algorithms to reduce their running time. Often I find myself in a situation where I can do one of the following:

(1) remove a branch entirely (i.e. reduce the number of arguments of the branching factor)

(2) choose between two branching vectors, say $x$ and $y$. The branching vectors are the same except for two elements: we have $x_i < y_i$ and $x_j > y_j$ with $i \neq j$. ($x_i$ is the element at position $i$ of branching vector $x$.)

I have two questions:

(1) Intuitively, it seems to me that if I remove a branch entirely, this reduces the running time of the algorithm. Essentially we are counting the leaves of the branching tree; by definition, if we remove an entire branch, the number of leaves should decrease, and therefore the running time should be smaller. However, I cannot find such a theorem / proof in the literature.

Mathematically, you essentially have an equation of the form $\sum a_i x^{n - c_i} = 0$, where we only care about the largest positive real root. If I now take the same equation but remove one of the $a_i$ terms, does the largest positive real root decrease? I suppose this is a fundamental / elementary theorem somewhere, but I can't seem to find it.

(2) The book has a Lemma (2.3): $\tau(i, j) \ge \tau(i + \epsilon, j - \epsilon)$ for all $0 \le i \le j$ and all $0 \le \epsilon \le \frac{j-i}{2}$. I cannot use this lemma in most instances of this problem. I assume I will have to compute the roots for each branching vector, right? And then take the minimum? Or is there a more fundamental theorem for this?
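Regarding computing the roots directly: since $\tau(t_1, \dots, t_r)$ is the unique root greater than 1 of $1 = \sum_i x^{-t_i}$, and $f(x) = 1 - \sum_i x^{-t_i}$ is strictly increasing for $x > 0$, a simple bisection suffices. A small sketch:

```python
# Numerically compute the branching factor tau(t_1, ..., t_r): the
# unique root x > 1 of 1 = sum(x**-t), found by bisection because
# f(x) = 1 - sum(x**-t) is strictly increasing for x > 0.

def branching_factor(ts, tol=1e-12):
    f = lambda x: 1.0 - sum(x ** -t for t in ts)
    lo, hi = 1.0, 2.0
    while f(hi) < 0:        # grow the bracket until f changes sign
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# tau(1, 2) is the golden ratio, the classic (1, 2)-branching vector
print(branching_factor([1, 2]))  # ~1.6180339887
```

Comparing two candidate branching vectors then amounts to computing both factors and keeping the smaller, which is exactly the "compute each root and take the minimum" fallback when the lemma does not apply.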

META: Should I (cross-)post this on TCS?

algorithms – rolling running numbers

I am numbering generated files with two digits 00-99 and I want to retain the last 50.
The algorithm should tell me the number of the current file I am about to save and which of the previous files to delete.

If I reduce it to a one-digit toy version, this is how I want it to behave.

  1. As long as there are fewer than 5 files, we can only add files.
    This is represented in the first block below.
  2. Once there are 5 previous files, the oldest one gets deleted.
    This is trivial as long as we have not reached the maximum number, here 9. This is represented in the second block below.
  3. Once the highest number has been reached, the numbering should wrap around, and this is where things get tough. See the third block below.
__prev__    __curr__   __drop__  
     0         null
0              1         null
0 1            2         null
0 1 2          3         null
0 1 2 3        4         null

0 1 2 3 4      5          0 
1 2 3 4 5      6          1
2 3 4 5 6      7          2
3 4 5 6 7      8          3
4 5 6 7 8      9          4

5 6 7 8 9      0          5
6 7 8 9 0      1          6
7 8 9 0 1      2          7
8 9 0 1 2      3          8
9 0 1 2 3      4          9

Some pseudocode to illustrate my progress:

if length(prev) < 5
   curr = length(prev)
   drop = null

else

if 9 not in prev
   curr = (1+max(prev)) mod(10)
   drop = curr-5

if 9 in prev

... and this is where I am stuck. I have experimented with subtracting 10 from each item in prev.

I'm sure it's trivial and will hit me when I least expect it. I also have a lingering feeling that I will find an expression that generalizes both cases (with and without the maximum in prev).
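For the record, one expression that does generalize both cases is modular subtraction (a sketch; it assumes prev is kept in order from oldest to newest, as in the blocks above, so the newest number is prev[-1]):

```python
KEEP, BASE = 5, 10   # 50 and 100 in the real two-digit version

def next_and_drop(prev):
    # prev is ordered oldest -> newest
    if len(prev) < KEEP:
        return len(prev), None        # still filling up
    curr = (prev[-1] + 1) % BASE      # one past the newest file
    drop = (curr - KEEP) % BASE       # == prev[0], the oldest file
    return curr, drop

# reproduces the second and third blocks above:
print(next_and_drop([0, 1, 2, 3, 4]))  # (5, 0)
print(next_and_drop([5, 6, 7, 8, 9]))  # (0, 5)
print(next_and_drop([9, 0, 1, 2, 3]))  # (4, 9)
```

The single expression (curr - KEEP) % BASE covers both the no-wrap and the wrap case.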

algorithms: list the terms resulting from the decomposition of a number by repeated divisions by 2

Consider a natural number $n > 1$. We express it as $\lfloor \frac{n}{2} \rfloor + \lceil \frac{n}{2} \rceil$. We repeat the process for each of the two terms until all terms are 1 or 2. For example, $9 = 4 + 5 = 2 + 2 + 2 + 3 = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 2$.

There will be $2^{\lfloor \log_2 n \rfloor}$ terms, because the decomposition forms a full binary tree of height $\lfloor \log_2 n \rfloor$.

I am looking for an iterative form of this recursive process. The enumeration $a_0 = 0$, $a_{i+1} = \left\lfloor \frac{(i+1) \cdot n}{2^{\lfloor \log_2 n \rfloor}} \right\rfloor - a_i$ comes close, because it satisfies the following conditions: (a) each term is 1 or 2; (b) the first $2^{\lfloor \log_2 n \rfloor}$ terms sum to $n$. But the terms are not identical to those of the recursive decomposition.
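For reference, here is a direct level-by-level transcription of the recursive process, splitting every term at each level so the tree stays full, which matches the worked example above:

```python
def decompose(n):
    terms = [n]
    while any(t > 2 for t in terms):
        # split every term on the current level into floor/ceil halves;
        # all terms on one level differ by at most 1, so no 1 is ever split
        terms = [half for t in terms for half in (t // 2, (t + 1) // 2)]
    return terms

print(decompose(9))  # [1, 1, 1, 1, 1, 1, 1, 2]
```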

Any help will be welcome. Thank you!

algorithms: how to interpolate between a changing value and a constant?

I have a slider that goes from 0 to 100, and as the slider increases it needs to move a float from its current value toward a target value, so that when the slider is at 100 the float equals the target value.

Added difficulty: there is no way to store the original starting value. The equation is used as an expression that is re-run every time the slider is moved, so you can only retrieve the current float value.
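One workaround, if the previous slider position can be read or cached even though the original float cannot (an assumption on my part): step toward the target by the fraction of the slider range that remains. Each step then lands exactly on the straight line from the unknown starting value to the target:

```python
def step(value, target, s_prev, s_new, s_max=100):
    # move by the fraction of the remaining slider range
    if s_prev >= s_max:
        return target
    t = (s_new - s_prev) / (s_max - s_prev)
    return value + (target - value) * t

# dragging 0 -> 30 -> 55 -> 100 hits the target exactly at 100
v = 8.0
for a, b in [(0, 30), (30, 55), (55, 100)]:
    v = step(v, 20.0, a, b)
    print(v)   # 11.6, 14.6, 20.0
```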

Thanks for any help!

How should I do this algorithm? / What algorithms are available to schedule tasks?

As part of a school assessment task, I must create a solution that incorporates a complex algorithm.

I have an idea for the algorithm, and it works like this: it takes the tasks to be completed for the week and breaks them down into time slots across the week, taking into account their critical status, when they are due, the duration of each task, etc. It should also have an alarm system, reminders, and a progress tracker.

So I want to know how I could make an algorithm that decides where to place the tasks so that they are completed by a certain time.
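As one possible starting point (a sketch only; the task fields and slot granularity are illustrative assumptions, not a prescribed design), a greedy scheduler that handles the critical status, due date, and duration part could look like this:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: int    # length in slots (e.g. half-hour blocks)
    due_slot: int    # deadline as a slot index within the week
    critical: bool

def schedule(tasks, total_slots):
    free = list(range(total_slots))   # slot indices still open
    plan = {}                         # slot index -> task name
    # most urgent first: critical tasks, then earliest deadline
    for t in sorted(tasks, key=lambda t: (not t.critical, t.due_slot)):
        usable = [s for s in free if s < t.due_slot][:t.duration]
        if len(usable) < t.duration:
            print(f"warning: {t.name} cannot fit before its deadline")
        for s in usable:
            plan[s] = t.name
            free.remove(s)
    return plan

week = [Task("essay", 4, 20, True), Task("math sheet", 2, 10, False)]
print(schedule(week, 48))
```

Alarms, reminders, and progress tracking would then be bookkeeping layered on top of the resulting plan rather than part of the placement algorithm itself.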

It would be appreciated if you could help.

Thank you