complexity theory – $k$ disjoint triangles with the graph split into two distinct groups

Please note that this question is different from this question

The $k$-disjoint triangles problem is as follows:

Input: A graph $G=(V,E)$ and an integer $k \in \mathbb{N}$

Output: Are there $k$ vertex-disjoint triangles in $G$?

An FPT algorithm is presented here (starting from slide 60). The algorithm uses color-coding and relies on dynamic programming to determine whether a highlighted solution exists (one in which each vertex of the solution is colored with a distinct color). The running time of the algorithm is $O^*((2e)^{3k})$.
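For reference, here is a minimal Python sketch of one round of the standard color-coding approach (random coloring with $3k$ colors, then a subset dynamic program over the colors); it is my own rendering of the technique, not necessarily the slides' exact algorithm, and the adjacency-dict representation and helper names are assumptions.

import random
from itertools import combinations

def triangles(adj):
    # Enumerate each triangle u < v < w of the graph, given as an
    # adjacency dict {vertex: set of neighbours}.
    for u in adj:
        for v, w in combinations(sorted(n for n in adj[u] if n > u), 2):
            if w in adj[v]:
                yield u, v, w

def colorful_round(adj, k):
    # One round of color-coding: color V with 3k colors uniformly at
    # random, then decide by a subset DP whether k vertex-disjoint
    # triangles exist whose 3k vertices got pairwise distinct colors.
    C = 3 * k
    color = {v: random.randrange(C) for v in adj}
    # 3-sets of colors realized by at least one triangle of G.
    realized = {frozenset((color[u], color[v], color[w]))
                for u, v, w in triangles(adj)
                if len({color[u], color[v], color[w]}) == 3}
    memo = {frozenset(): True}
    def can(S):
        # can(S): can the color set S be partitioned into realized 3-sets?
        if S not in memo:
            c = min(S)  # the smallest color in S must go to some triple
            memo[S] = any(frozenset((c, a, b)) in realized and can(S - {c, a, b})
                          for a, b in combinations(sorted(S - {c}), 2))
        return memo[S]
    return can(frozenset(range(C)))

def k_disjoint_triangles(adj, k, rounds):
    # A fixed solution becomes colorful with probability >= e^{-3k}, so
    # ~e^{3k} rounds give constant success probability; together with
    # the 2^{3k} subset DP this is the O*((2e)^{3k}) bound.
    return any(colorful_round(adj, k) for _ in range(rounds))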

Now, let’s assume that we are given a set $X \subseteq V$ of vertices, and the problem changes: are there $k$ vertex-disjoint triangles in $G$ where each triangle has one vertex from $X$ and the other two from $V \setminus X$?

I need to find an algorithm whose running time is $O^*((2e)^{2k})$. I tried coloring all vertices of $X$ with a single color, but I couldn’t find a way to avoid duplicate choices of vertices from $X$. I also tried coloring each vertex of $X$ with a distinct color, different from the colors used on $V \setminus X$, but the resulting running time was higher.

Can you propose a coloring method that will highlight a possible solution within the required complexity? Should I try something else?

algorithm analysis – Why do researchers only count the number of multiplications when analysing the time complexity of matrix multiplication?

It looks like the article was written by someone who does not understand matrix multiplication.

the number of additions is equal to the number of entries in the matrix, so four for the two-by-two matrices and 16 for the four-by-four matrices.

With the classic matrix multiplication algorithm (which is the one explained in the example) between two $4 \times 4$ matrices, each coefficient of the product requires $3$ additions, so a total of $3 \times 16 = 48$ additions, not $16$.

Usually, when it is said that only multiplications matter and not additions, it means multiplications of matrices, not of coefficients.

For example, a “naive” divide-and-conquer strategy computes $A \times B$, where $A,B$ are matrices of size $n \times n$, by doing $8$ products of matrices of size $\frac{n}{2} \times \frac{n}{2}$ (and some additions of matrices, but those are done in complexity $O(n^2)$, which is negligible compared to the complexity of matrix multiplication). That way, the complexity verifies $C(n) = 8C\left(\frac{n}{2}\right) + O(n^2)$, and the master theorem gives $C(n) = O(n^{\log_2 8}) = O(n^3)$, so this strategy is not an improvement.

The Strassen algorithm uses a divide-and-conquer strategy to improve the complexity: it does $7$ products of matrices of size $\frac{n}{2} \times \frac{n}{2}$ and some additions, so the complexity verifies $C(n) = 7C\left(\frac{n}{2}\right) + O(n^2)$, and we get $C(n) = O(n^{\log_2 7}) \approx O(n^{2.81})$.
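To make the seven products concrete, here is a minimal Python sketch of Strassen's identities on $2 \times 2$ operands (my own rendering); in the recursive algorithm the eight entries are $\frac{n}{2} \times \frac{n}{2}$ blocks, * is a recursive matrix product, and +/- are the $O(n^2)$ matrix additions mentioned above.

def strassen_2x2(A, B):
    # Seven multiplications instead of eight (Strassen's identities).
    # A and B are 2x2 nested lists; with numbers this is ordinary
    # arithmetic, with matrix blocks it is the recursive step.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the seven products into the four result entries.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]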

In the two examples above, the number of matrix multiplications matters much more than the number of matrix additions. But in neither of them are multiplications and additions of coefficients compared.

This is confirmed in the article:

Volker Strassen reportedly set out to prove that there was no way to multiply two-by-two matrices using fewer than eight multiplications. Apparently he couldn’t find the proof, and after a while he realized why: There’s actually a way to do it with seven!

complexity theory – Consequences of NP-completeness and DP-completeness w.r.t. randomized reductions

If a problem is NP-complete with respect to randomized (polynomial-time) reductions, but not with respect to deterministic reductions, then we have P $\neq$ BPP (see Question 2 here and its answer).

Suppose a decision problem is proved to be NP-complete; and it is also proved to be DP-complete with respect to randomized reductions. Does this have any real consequence?

Context
Given a graph $G$, it is coNP-complete to test whether $G$ has exactly one 3-colouring up to swapping of colours (because the “another solution” problem associated with 3-colouring is NP-complete (1)). The same problem is DP-complete with respect to randomized reductions (2).

(1) Dailey, David P., Uniqueness of colorability and colorability of planar 4-regular graphs are NP-complete, Discrete Math. 30, 289-293 (1980). ZBL0448.05030.

(2) Barbanchon, Régis, On unique graph 3-colorability and parsimonious reductions in the plane, Theor. Comput. Sci. 319, No. 1-3, 455-482 (2004). ZBL1043.05043.

computational complexity – Does the function $n(\log n)^{100}$ have a bigger growth rate than the function $n^2$?

I am currently studying complexity in my CS class and am stuck on a question that requires me to order functions by growth rate. I am stuck on the function $n(\log n)^{100}$: although $n\log n$ has a lower growth rate than $n^2$, I have heard that $n(\log n)^k$, where $k$ is any positive number, can have a very big growth rate depending on the value of $k$. My question is: does this function grow faster than $n^2$?
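For reference, one standard way to compare the two functions is to look at their ratio; the substitution $n = 2^m$ below is just one convenient presentation:

$$\frac{n(\log_2 n)^{100}}{n^2}=\frac{(\log_2 n)^{100}}{n}=\frac{m^{100}}{2^m}\to 0\quad(n=2^m,\ m\to\infty),$$

so $n(\log n)^{100}=o(n^2)$: any fixed power of $\log n$ is eventually dominated by any fixed positive power of $n$, even though the crossover happens only at astronomically large $n$ when the exponent is $100$.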


nt.number theory – Complexity of a Diophantine equation having $\leq 1$ solutions

We are given a single Diophantine equation
$$f(x_1,\dots,x_n)=0$$
of degree $\geq 2$, with the promise that it has $\leq 1$ solutions in the set $\{0,1\}^n$; let $t$ be the number of terms in the polynomial.

We are to decide whether $$|\{(x_1,\dots,x_n)\in\{0,1\}^n : f(x_1,\dots,x_n)=0\}|>0.$$

Is there a $\mathsf{poly}(nt)$ algorithm for this problem?

algorithms – Time complexity of finding the median in a data stream

I was reading a solution to the problem in the title on LeetCode, and the article says that the time complexity of the following solution is O(n):

  1. set up a data structure to hold the stream values and insert each new element in the right place using linear search or binary search
  2. return the median

My solution is as follows:

import bisect

class MedianFinder:

    def __init__(self):
        """
        Initialize your data structure here.
        """
        # Keep the stream values in a list that stays sorted.
        self.arr = []

    def addNum(self, num: int) -> None:
        # Binary search for the insertion point: O(log n).
        idx = bisect.bisect_left(self.arr, num)
        # Inserting shifts everything after idx: O(n).
        self.arr.insert(idx, num)

    def findMedian(self) -> float:
        # The list is kept sorted, so no sort is needed here.
        n = len(self.arr)
        if n % 2 != 0:
            return self.arr[n // 2]
        else:
            return (self.arr[n // 2 - 1] + self.arr[n // 2]) / 2

My question is about the time complexity of the addNum method: the binary search takes O(log n) to find the index and the insert takes O(n). But since the method is called once per element of the stream, will the overall complexity be O(n^2) or O(n)?
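For reference, the worst-case aggregate cost of the insertions can be summed directly (presenting it as a sum is my own choice, not part of the linked article):

$$\sum_{i=1}^{n}\left(O(\log i)+O(i)\right)=O(n\log n)+O(n^2)=O(n^2),$$

so processing a stream of $n$ elements costs O(n^2) in the worst case, i.e. O(n) per call to addNum, with the linear insert dominating the binary search.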


Understanding of time complexity in non-deterministic and deterministic Turing machines

If I assume that the time complexity of a non-deterministic Turing machine $N$ is $T(n)$, where $n = |w|$ and $w$ is the input string for $N$, what would be the time complexity of a deterministic Turing machine $D$ equivalent to $N$?

logic – Complexity of pattern matching for modus ponens logical conclusions

Is a Turing machine with the following constant-time operation added equivalent (in the sense that polynomial time remains polynomial time and exponential time remains exponential time) to a (usual) Turing machine:

By predicates I will mean predicates in first-order predicate calculus. (Note that predicates may have free variables.)

  • constant-time modus-ponens resolution (yes or no), followed by adding $y$ to the end of this array if the answer is yes, for given predicates $x$ and $y$ and an array (or a linked list) of predicates. By the definition of modus ponens, the answer is yes if and only if some element of the array is $X \Rightarrow y$, where $X$ is a pattern matching $x$.

Remark: The above operation is part of the standard procedure of proof-checking in first-order predicate logic.
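For concreteness, here is a minimal Python sketch of the operation realized naively in linear (not constant) time; the term representation, nested tuples with variables as strings starting with '?', is my own assumption.

def match(pattern, term, env=None):
    # One-way pattern matching: variables in `pattern` (strings that
    # start with '?') may bind to arbitrary subterms of `term`.
    env = dict(env or {})
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in env and env[pattern] != term:
            return None
        env[pattern] = term
        return env
    if (isinstance(pattern, tuple) and isinstance(term, tuple)
            and len(pattern) == len(term)):
        for p, t in zip(pattern, term):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pattern == term else None

def modus_ponens_step(facts, x, y):
    # Scan for an implication ('=>', X, y) whose antecedent X matches x;
    # if one is found, append y to the list and report success.
    # This naive realization costs time linear in the number (and size)
    # of stored predicates, not O(1).
    for f in facts:
        if isinstance(f, tuple) and len(f) == 3 and f[0] == '=>':
            if f[2] == y and match(f[1], x) is not None:
                facts.append(y)
                return True
    return False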

If the above hypothesis is false, then what are the upper bounds on the running time of the above operation in different kinds of Turing machine equivalents (such as a Turing machine, Markov algorithms, a von Neumann architecture with infinitely many infinitely big words of memory, etc.)?

BTW, is a von Neumann architecture with infinitely many infinitely big words of memory a Turing machine equivalent? (I think yes, but I am not 100% sure.)

complexity theory – Constant-time adding an element?

Is a computer with infinite memory and infinite word size a Turing machine equivalent (in the sense that polynomial time remains polynomial time and exponential time remains exponential time) if we allow constant-time linked-list element insertion (at the beginning of the list)?

I doubt this, because element insertion requires memory allocation, and allocation is usually not a constant-time operation.
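For what it's worth, here is a minimal Python sketch of the insertion in question; it is constant-time only under the usual RAM-model assumption, which the question is precisely doubting, that allocating one fixed-size record takes O(1).

class Node:
    # One cons cell: a value plus a pointer to the rest of the list.
    __slots__ = ('head', 'tail')

    def __init__(self, head, tail):
        self.head = head
        self.tail = tail

def cons(x, lst):
    # Prepending allocates a single fixed-size node and sets two fields;
    # no existing cell is touched, so under the O(1)-allocation
    # assumption this is a constant-time operation.
    return Node(x, lst)

# Example: build the list 3 -> 2 -> 1 by three constant-time insertions.
lst = cons(1, None)
lst = cons(2, lst)
lst = cons(3, lst)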