np – Is FIND WORDS in P?

FIND WORDS is the following decision problem:

Given a list of words $L$ and a matrix $M$ of letters, do all the words in $L$ also appear in $M$?
The words in $M$ can be written from top to bottom, from bottom to top, from left to right, from right to left, and along the four diagonal directions.
To be specific, this is the classic word-search game found in puzzle magazines: FIND WORDS.

Now, this decision problem is clearly in NP,
because, given a certificate with the positions (indices) of the words in the matrix, a verifier can check it in polynomial time.

My question is this: do we know of a Turing machine that decides this language in polynomial time?
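To make the question concrete, here is a minimal sketch (with illustrative names of my own) of the obvious brute-force check; I believe it runs in polynomial time, but I am not sure whether it counts as the kind of decider I am asking about:

    def words_in_grid(words, grid):
        # Brute force: for each word, try every starting cell and each of the
        # eight directions (up, down, left, right and the four diagonals).
        rows, cols = len(grid), len(grid[0])
        directions = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)]

        def found(word):
            for r in range(rows):
                for c in range(cols):
                    for dr, dc in directions:
                        rr, cc = r, c
                        for ch in word:
                            if not (0 <= rr < rows and 0 <= cc < cols) or grid[rr][cc] != ch:
                                break
                            rr, cc = rr + dr, cc + dc
                        else:
                            return True   # every character of the word matched
            return False

        return all(found(w) for w in words)

    # Roughly |L| * rows * cols * 8 * (max word length) steps: polynomial.
    print(words_in_grid(["CAT"], ["CAX", "QAY", "ZZT"]))  # True (found on a diagonal)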

np complete – Reduction from Hamiltonian path to Hamiltonian cycle

HAMPATH

  • Input: An undirected graph $G$ and two nodes $s, t$

  • Question: Does $G$ contain a Hamiltonian path from $s$ to $t$?

HAMCYCLE

I want to show that HAMCYCLE is NP-hard

I'm going to show this by proving $\mathrm{HAMPATH} \leq_p \mathrm{HAMCYCLE}$, since it is known that HAMPATH is NP-complete.

The reduction is as follows

$(G, s, t) \mapsto (G', s')$

where $s' = s$, and $G'$ is $G$ with an added edge from $t$ to $s'$.

This is polynomial time because we are adding only one edge.
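In code, the reduction I have in mind looks roughly like this (a sketch, with my own encoding of an undirected graph as a set of frozenset edges):

    def hampath_to_hamcycle(G, s, t):
        # G is a set of frozenset({u, v}) edges; the output graph G' is G plus
        # the single edge {t, s'}, with s' = s.  Adding one edge is clearly
        # polynomial time.
        G_prime = set(G)
        G_prime.add(frozenset((t, s)))
        return G_prime, s

    # Example: the path 1-2-3, asking for a Hamiltonian path from 1 to 3.
    G = {frozenset((1, 2)), frozenset((2, 3))}
    G_prime, s_prime = hampath_to_hamcycle(G, 1, 3)
    print(G_prime)  # now also contains {1, 3}, so 1-2-3-1 is a Hamiltonian cycle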

If $(G, s, t) \in \mathrm{HAMPATH}$, then we know there is a Hamiltonian path from $s$ to $t$; in our graph $G'$ it will be $(s', \dots, t)$, but since we added an edge from $t$ to $s'$, we get

$(s', \dots, t, s')$, a cycle, so $(G', s') \in \mathrm{HAMCYCLE}$.

Now for the other direction: if $(G', s') \in \mathrm{HAMCYCLE}$, then there is a Hamiltonian cycle $(s', \dots, s')$ which visits each node and returns to $s'$, which means there is a node $t$ right before $s'$; dropping that closing edge makes this a Hamiltonian path, therefore $(G, s, t) \in \mathrm{HAMPATH}$.


That is my whole attempt. I was wondering whether I am allowed to refer to $t$ in my reduction, since it is not part of the input to HAMCYCLE?

np: can I calculate the time to solve the traveling salesman problem optimally with a supercomputer, and can I know the current limit on the number of cities?

I want to know the real limit of the computing power we have now.
What is the limit on the number of cities for which I can obtain an optimal solution?
I think the top computer does about $10^{19}$ operations per second.

And:
can I calculate the time it will take by looking at the connections of the directed graph?
And then get the time by dividing by this $10^{19}$ figure?
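To make the arithmetic concrete, this is the kind of estimate I mean (a rough sketch that just divides the $n!$ candidate orderings by the assumed $10^{19}$ operations per second, ignoring the cost of evaluating each individual tour):

    from math import factorial

    OPS_PER_SECOND = 1e19      # the figure assumed above
    SECONDS_PER_YEAR = 3.15e7

    for n in (15, 20, 25, 30, 50):
        seconds = factorial(n) / OPS_PER_SECOND
        print(f"{n} cities: {seconds:.3g} s (~{seconds / SECONDS_PER_YEAR:.3g} years)")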

How can I edit the actual graph of the problem and delete cities from it?

complexity theory: are $\mathsf{\#P}$ problems harder than $\mathsf{NP}$ problems?

As I suggested in the comments, the fact that your reduction exists is not at all surprising. As in my answer to your previous question, your "Expand and Simplify" step takes potentially exponential time and therefore does not qualify as a polynomial-time reduction (which is the standard notion used to compare the classes in question). Exponential-time reductions are irrelevant here because both $\mathsf{NP}$ and $\mathsf{\#P}$ are contained in $\mathsf{EXP}$, so the reduction itself is powerful enough to solve the very problems it is supposed to be reducing.

To address the question in the title, note the following: let $P \in \mathsf{NP}$ be some problem, let $S_P$ be the solution relation associated with $P$ (that is, the relation on $\{0,1\}^\ast \times \{0,1\}^\ast$ that associates each $x \in P$ with its polynomially verifiable solutions), and let $\#S_P \colon \{0,1\}^\ast \to \mathbb{N}_0$ be the counting problem associated with $S_P$, that is, $\#S_P(x) = |S_P(x)|$. For $P = \mathsf{SAT}$, for example, $S_P$ is simply the relation between formulas and their satisfying assignments, and $\#S_P$ is the number of satisfying assignments. Then you can express $P$ as the decision problem $\{x \mid \#S_P(x) \ge 1\}$. This means that having a procedure that counts the solutions (that is, computes $\#S_P \in \mathsf{\#P}$) directly yields something that decides $P \in \mathsf{NP}$.
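As a toy illustration of that last point (a minimal sketch; the brute-force counter below merely stands in for a $\mathsf{\#P}$ oracle and is of course exponential, since only the final comparison against $1$ matters here):

    from itertools import product

    def count_satisfying(clauses, n_vars):
        # Stand-in for #SAT: count the satisfying assignments of a CNF formula.
        # A clause is a list of signed ints, e.g. [1, -2] means (x1 OR NOT x2).
        count = 0
        for bits in product([False, True], repeat=n_vars):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                count += 1
        return count

    def decide_sat(clauses, n_vars):
        # The reduction above: x is in SAT iff #S_SAT(x) >= 1.
        return count_satisfying(clauses, n_vars) >= 1

    print(decide_sat([[1, 2], [-1, 2]], 2))  # True: (x1 OR x2) AND (NOT x1 OR x2)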

Not only that, but note that above we did not even use the value $\#S_P(x)$ beyond checking that it is nonzero. We may be able to use the exact value to solve problems much harder than those in $\mathsf{NP}$ (for example, as Toda's theorem tells us; see David Richerby's answer). This gives us good reason to believe that the problems in $\mathsf{\#P}$ are, in general, much harder than those in $\mathsf{NP}$ (the most conspicuous candidates being the $\mathsf{\#P}$-complete problems).

np hard: if an algorithm can solve larger NP problem instances than normal algorithms, would that be P = NP for some cases, or only up to a limit?

Assume that an algorithm can find the optimal solution of a problem like the traveling salesman problem for 25 cities, which means 25! possibilities, in polynomial time by using supercomputing power.
So if there is an algorithm that can solve it for 50 cities, that means 50! possibilities;
with the use of the supercomputer's power, this means it could narrow that huge range of possibilities down to a small number that could be solved like the 25-city case.

So what does this algorithm mean: P = NP up to the limit of 50 cities? Or can you simply get an answer below that limit, but not solve higher numbers like 100!, even though the algorithm could discard a large number of the 100! possibilities?
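Just to put the numbers above in perspective (rough arithmetic):

$$
25! \approx 1.6 \times 10^{25}, \qquad 50! \approx 3.0 \times 10^{64}, \qquad \frac{50!}{25!} \approx 2 \times 10^{39},
$$

so reducing a 50-city search to the work of a 25-city search would mean discarding roughly a factor of $2 \times 10^{39}$ of the candidate orderings.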

np complete – Verifying a Hamiltonian cycle solution in $O(n^2)$, where $n$ is the length of the encoding of $G$

In the CLRS textbook, 'Ch. 34.2 Polynomial-time verification' says the following:

Suppose a friend tells you that a given
graph $G$ is Hamiltonian, and then offers to prove it by giving you the vertices in order along the Hamiltonian cycle. It would certainly be easy enough to verify the proof: simply verify that the provided cycle is Hamiltonian by checking whether it is a permutation of the vertices of $V$ and whether each of the consecutive edges along the cycle actually exists in the graph. You could certainly implement this verification algorithm to run in $O(n^2)$ time, where $n$ is the length of the encoding
of $G$.

The way I see it, for every consecutive pair $(u, v)$ of the given cycle, we could check whether it is an edge in $G$. In addition, we could color-code each vertex to make sure we do not revisit a vertex. By doing so, we could verify whether the given cycle is Hamiltonian in $O(E) = O(m^2)$ time, where $m$ is the number of vertices in $G$. Moreover, we can take the minimum encoding length $n$ of $G$ to be $m^2 = n$. Thus $O(E) = O(m^2) = O(n)$. Can anyone help me understand why it is stated as $O(n^2)$ instead?
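Here is a rough sketch of the verification I have in mind (illustrative names; the edge set is stored as a hash set of frozensets):

    def is_hamiltonian_cycle(edges, vertices, cycle):
        # Check that the claimed cycle is a permutation of the vertices...
        if sorted(cycle) != sorted(vertices):
            return False
        # ...and that every consecutive pair, including the wrap-around edge
        # from the last vertex back to the first, is an edge of G.
        m = len(cycle)
        return all(frozenset((cycle[i], cycle[(i + 1) % m])) in edges
                   for i in range(m))

    V = [1, 2, 3, 4]
    E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]}
    print(is_hamiltonian_cycle(E, V, [1, 2, 3, 4]))  # True
    print(is_hamiltonian_cycle(E, V, [1, 3, 2, 4]))  # False: {2, 4} is not an edge

With the edges in a hash set, the loop itself is about $O(m)$ work; building the set from the encoding costs $O(E)$, which is the bound I used above.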

np hard – Dubins TSP NP-hardness proof

In the paper by Le Ny et al., On the Dubins Traveling Salesman Problem (https://tinyurl.com/y59f7d8x), the authors prove, among other results, that the Dubins traveling salesman problem (DTSP) is NP-hard. I will give here the basic outline of the proof, after which I will ask my question regarding this proof.

The authors reduce Exact Cover to DTSP, adapting Papadimitriou's reduction from Exact Cover to the Euclidean TSP (ETSP). This is done by first observing that Papadimitriou's proof establishes that Exact Cover has a solution if and only if the optimal ETSP tour is no longer than some $L$. Le Ny et al., however, note that if Exact Cover admits no solution, then the optimal ETSP tour has length at least $L + \delta$, not merely more than $L$, for some specified $\delta$. Next, the optimal DTSP tour length relates to the optimal ETSP tour length as follows: $\mathrm{DTSP} \leq \mathrm{ETSP} + Cn$, for some constant $C$ (proved by Savla et al.). Le Ny et al. then build Papadimitriou's ETSP instance from a given Exact Cover instance, with all distances then multiplied by $2Cn/\delta$.
They then prove that Exact Cover has a solution if the optimal DTSP tour has length no greater than $2CnL/\delta + Cn$, while Exact Cover has no solution if the optimal DTSP tour has length at least $2CnL/\delta + 2Cn$.
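To spell out the arithmetic of the scaled construction as I understand it, writing $\mathrm{ETSP}'$ and $\mathrm{DTSP}'$ for the optimal tour lengths in the scaled instance (and assuming, as I believe the paper does, that a Dubins tour is never shorter than the Euclidean tour through the same points):

$$
\text{solution exists:}\quad \mathrm{ETSP}' \le \tfrac{2Cn}{\delta} L \;\Longrightarrow\; \mathrm{DTSP}' \le \mathrm{ETSP}' + Cn \le \tfrac{2CnL}{\delta} + Cn,
$$

$$
\text{no solution:}\quad \mathrm{ETSP}' \ge \tfrac{2Cn}{\delta}(L + \delta) = \tfrac{2CnL}{\delta} + 2Cn \;\Longrightarrow\; \mathrm{DTSP}' \ge \mathrm{ETSP}' \ge \tfrac{2CnL}{\delta} + 2Cn.
$$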

My question now is: why is it necessary to multiply all distances by $2Cn/\delta$?
It seems to me that if you do not multiply the distances by anything, Exact Cover will have a solution if the optimal DTSP tour is no longer than $L + Cn$, and Exact Cover will have no solution if the optimal DTSP tour has length at least $L + \delta + Cn$. What am I missing here?

How do you come up with NP-complete problems and NP problems?

Therefore, to my understanding, NP-complete is when there is no known algorithm to solve it. Take clique, for example: it is a known NP-complete problem, specified as follows.

clique

input: an undirected graph $G$ and a natural number $k$

question: does $G$ have a clique of size $k$?


How can I make a twist on this problem so that I know it is still NP-complete?

np complete – Reducing Subset Sum to 3SAT

Generally, there are no intuitive or illustrative reductions between problems in different problem domains.

The proof that 3SAT is NP-complete essentially amounts to writing a formula that says "this NP Turing machine accepts this input." For other problems about logical formulas, you can often translate the formula into a 3SAT instance directly. Sometimes you can express problems from other domains as 3CNF formulas: for example, you can encode 3-colorability with a variable for each vertex–color combination, meaning "vertex $v$ has color $q$", and write a formula saying that each vertex has exactly one color and that adjacent vertices have different colors.
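For instance, one standard way to write the 3-coloring formula just described, with $x_{v,q}$ meaning "vertex $v$ has color $q$", is

$$
\bigwedge_{v \in V} (x_{v,1} \lor x_{v,2} \lor x_{v,3}) \;\land\; \bigwedge_{v \in V} \bigwedge_{q < q'} (\lnot x_{v,q} \lor \lnot x_{v,q'}) \;\land\; \bigwedge_{\{u,v\} \in E} \bigwedge_{q} (\lnot x_{u,q} \lor \lnot x_{v,q}),
$$

where the three conjunctions say, respectively, that every vertex gets at least one color, no vertex gets two colors, and adjacent vertices get different colors; every clause has at most three literals, so the formula is already (essentially) in 3CNF.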

However, in general, when faced with a problem from a completely different domain, you cannot do much better than saying that Subset Sum is in NP, so it is decided by some NP Turing machine, which you can then express as an instance of 3SAT. Maybe you can come up with a less generic reduction, but it probably would not teach you anything.

np hard – How to determine the complexity of a mixed-strategy Nash equilibrium problem

How do you determine the complexity of finding a mixed-strategy Nash equilibrium of a complete-information game with finite numbers of players and strategies? That is, there is a payoff matrix that gives the payoffs for all of the players' strategy profiles. For example, 5 players each have 5 strategies, and everyone knows the outcomes for all $5 \times 5 \times 5 \times 5 \times 5$ combinations of strategies.

Another question: if I concatenate all the players' KKT conditions to solve this problem, what is the complexity of the resulting problem?