graphics3d – Minimum distance between a set of vertices in a 7-dimensional Hypercube

So I have eight 7-bit strings and I need to label them as vertices of the 7-cube. I then need to find two complementary vertices (two strings whose corresponding bits are all inverted) and a bipartition of the eight strings, assigning each set to one of the complementary vertices, such that the sum of the number of edges between each complementary vertex and the vertices of the set assigned to it is minimized.

For example, let {a,b,c,d,e,f,g,h} be the set of 7-bit strings, and let X and X’ be the complementary vertices. Partition the set as {a,d,e} and {b,c,f,g,h}, and assign X to the first set and X’ to the second. Then the value would be the sum of the edge counts between X-a, X-d, X-e, X’-b, X’-c, X’-f, X’-g, X’-h. We need to minimize this value over both the choice of partition and the choice of the two complementary vertices. Note that the partition need not be the one given in this example.

My vertices are:

  1. 0000000
  2. 0010111
  3. 0101101
  4. 0111010
  5. 1001011
  6. 1011100
  7. 1100110
  8. 1110001
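For a problem this small, brute force settles it: there are only $2^7 = 128$ candidate vertices (64 complementary pairs), and for a fixed pair the optimal bipartition simply assigns each string to whichever of the two vertices is nearer. A minimal Python sketch, under the assumption that “number of edges between” means shortest-path (i.e. Hamming) distance on the hypercube:

    strings = ["0000000", "0010111", "0101101", "0111010",
               "1001011", "1011100", "1100110", "1110001"]
    verts = [int(s, 2) for s in strings]

    def hamming(a, b):
        # shortest-path distance on the hypercube = number of differing bits
        return bin(a ^ b).count("1")

    # each complementary pair (x, x ^ 1111111) is visited twice; that is harmless
    cost, x = min(
        (sum(min(hamming(v, x), hamming(v, x ^ 0b1111111)) for v in verts), x)
        for x in range(2 ** 7)
    )
    print(cost, format(x, "07b"), format(x ^ 0b1111111, "07b"))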

Lower-bound for distance between origin and polyhedron

Let $x_1,\ldots,x_n \in \mathbb{R}^d$ (assumed to be linearly independent) and let $y_1,\ldots,y_n \in \{-1,+1\}$. Define $\Delta \ge 0$ by

$$
\Delta := \inf\{\|w\|^2 \mid w \in \mathbb{R}^d,\; b \in \mathbb{R},\; y_i(w^\top x_i + b) > 0 \;\forall i\},
$$

with the convention that $\inf \emptyset = +\infty$.

Question. As a function of the $x_i$’s and the $y_i$’s, what is a good lower bound for $\Delta$?
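(A side remark that may be worth recording: with the strict inequalities as written, $\Delta$ degenerates, since feasibility is invariant under downscaling,

$$
y_i(w^\top x_i + b) > 0 \;\forall i \quad\Longrightarrow\quad y_i\big((tw)^\top x_i + tb\big) > 0 \;\forall i,\ \forall t > 0,
$$

so $\Delta = 0$ whenever the constraint set is nonempty, i.e. $\Delta \in \{0, +\infty\}$. Presumably the intended constraint is $y_i(w^\top x_i + b) \ge 1$, which makes the feasible set a closed polyhedron, consistent with the title.)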

genetic algorithms – Travelling Salesman Problem: Distance between solutions

I’m designing a genetic algorithm to solve the travelling salesman problem. So far, I’ve gotten fairly good results. I’m now trying to improve on them by implementing some sort of diversification scheme (like fitness sharing and crowding), although I’m struggling with the conceptualisation of the inter-solution distance a bit.

Solutions represent a path that goes through all cities, i.e. a permutation of the order in which they are visited. This is represented in my code by np.arrays. If I want to know how similar two solutions are, I basically want to find the distance between two permutations of n_cities elements. I currently have two ideas for the distance function.

  1. Levenshtein distance, which is simply the number of atomic edits by which two sequences differ.
  2. Hamming distance, which is the number of positions at which the two sequences differ.

Note that, for each solution, I make sure to cycle it so it starts in the same position (city). Otherwise these metrics won’t make sense.
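For concreteness, here is a minimal NumPy sketch of the canonical rotation plus the Hamming variant, together with an edge-based distance; the latter is arguably more natural for tours, since a tour is really a set of undirected edges rather than a sequence of positions (all helper names here are mine, not from any library):

    import numpy as np

    def canonical(tour, start=0):
        # rotate the permutation so it begins at city `start`
        i = int(np.argmax(tour == start))
        return np.roll(tour, -i)

    def hamming(a, b):
        # number of positions at which the canonically rotated tours differ
        a, b = canonical(a), canonical(b)
        return int(np.sum(a != b))

    def edge_distance(a, b):
        # number of undirected edges used by tour a but not by tour b;
        # invariant to rotation and to direction of travel
        def edges(t):
            n = len(t)
            return {frozenset((int(t[i]), int(t[(i + 1) % n]))) for i in range(n)}
        return len(edges(a) - edges(b))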

Which of them is more appropriate? Is there a better solution? I’ve browsed a number of articles, but haven’t really found an answer yet.

algorithms – Minimum sum of squared Euclidean distance between two arrays

Question:
Given two sorted sequences in increasing order, $X$ and $Y$, where $Y$ is of size $k$ and $X$ is of size $m$.
I would like to find a subset of $X$, i.e. an $X'$ of size $k$, solving the following optimization problem: $$d(Y,X') = \sum_{j=1}^{k}(y_{j}-x'_{j})^{2}$$ where $y_{j}$ and $x'_{j}$ are the elements of $Y$ and $X'$. I would like to find the subset of $X$ that reaches the minimum of $d(Y,X')$.
Note that the elements of $X'$ could a priori be arranged in $k!$ orders, so the matching order is totally unknown.


What I have come up with so far:
I would like to approach this using dynamic programming, and I think I would first compute the squared distance between each element of $Y$ and each element of $X$, but I’m having trouble determining what the subproblem is and how to solve this using DP. Thank you!
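For what it’s worth, here is one standard DP formulation, under the assumption (provable by an exchange argument for squared distances) that some optimal $X'$ is matched to $Y$ in sorted order. Let $dp[i][j]$ be the minimum cost of matching $y_1,\ldots,y_j$ using only $x_1,\ldots,x_i$; a sketch in Python:

    import math

    def min_sq_match(Y, X):
        # Y sorted, size k; X sorted, size m >= k
        k, m = len(Y), len(X)
        dp = [[0.0] + [math.inf] * k for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, min(i, k) + 1):
                dp[i][j] = min(
                    dp[i - 1][j],                                   # skip x_i
                    dp[i - 1][j - 1] + (Y[j - 1] - X[i - 1]) ** 2,  # pair x_i with y_j
                )
        return dp[m][k]  # O(mk) time, O(mk) space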

digital – Why do photographed objects in the distance appear smaller than they do to the naked eye?

There may be some psychological and human-perception factors involved, but, fundamentally, objects in photographs are as big as you make them. That may sound a little over-simplified, but, really, that’s all there is to it. If you have an image of a mountain, and you hold it at arm’s length and look at the real mountain in the distance right next to it, the photographed mountain may look either smaller or bigger depending on a) the size of the print and b) how much of that print is filled by the mountain.

If the mountain is too small, there are basically three ways you can adjust this.

First, you can of course get bigger paper and print bigger. Eventually, your printed mountain will look bigger than the distant one.

But if you’re starting from a mountain which only fills a small percentage of the frame, that might be a very large piece of paper with a lot of stuff you don’t care about around the edges. So, second, you can crop the image and enlarge it. This is exactly the same as making a larger print, except… don’t print the parts you don’t want. Then, you can have a relatively large mountain, without all that wasteful forest and sky around it.

But, your camera might not have captured enough detail for that mountain to look good cropped. You can improve this with a better camera, higher quality lenses, using a tripod, shooting on a day with less haze, and more, but there’s also an easy first step — instead of cropping to get a mountain that fills the frame, do it optically. So, third, use a lens with a narrow field of view to make the mountain fill more of your camera’s sensor.

We call a lens with a narrow field of view a “long” lens — or, often, “telephoto”, although that term is not technically correct. A lens with a wide field of view is usually just called “wide” (although you’ll hear “short”, too). And a lens in between is called “normal” — partly because such lenses give a field of view which roughly corresponds to human vision when printed at about 8×10 and held at a comfortable viewing distance. On a 35mm film camera or a full-frame DSLR, this is around 43mm, give or take — a number corresponding to the diagonal of the sensor, so convert as appropriate for other sensor sizes.
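As a quick sanity check on that 43mm figure: the diagonal angle of view of a rectilinear lens focused at infinity is $2\arctan(d/2f)$ for sensor diagonal $d$ and focal length $f$. A small sketch:

    import math

    def fov_deg(focal_mm, sensor_dim_mm):
        # angle of view of a rectilinear lens focused at infinity
        return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

    diag = math.hypot(36, 24)     # full-frame diagonal, ~43.3 mm
    print(fov_deg(43, diag))      # ~53 degrees: the "normal" view
    print(fov_deg(200, diag))     # ~12 degrees: a long lens narrows the view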

So, to recap, yes, it absolutely has to do with field of view, which corresponds to “zoom” — or at least to focal length. The exact magic number varies based on print size, but overall, match your lens focal length and print size appropriately, and you can make that mountain either bigger or smaller.

For more on the technical details behind this, see

plotting – Using the Line command with a bound on distance between points

Suppose I use the Line command applied to a list (or, more accurately, apply Graphics[Line[list]] or Graphics3D[Line[list]]). Is there some way to only connect points in the list which are less than a certain Euclidean distance apart?

A silly example:

Take

list1 = Flatten[Map[ReIm, Table[(1)*E^({j}*I*Pi/60) + (-2), {j, 0, 120}]], 1];
list2 = Flatten[Map[ReIm, Table[(1)*E^({j + 4}*I*Pi/60) + (2), {j, 0, 120}]], 1];

and compare the outputs of Graphics[Line[Join[list1, list2]]] and Show[Graphics[Line[list1]], Graphics[Line[list2]]].

The lists I usually work with would give rise to something like the first command, with an “extra” line connecting the two curves, and there is no easy way in general to separate the lists into sublists and apply a command of the second type. However, these “extra” lines are much longer than the lines I actually want, namely the segments approximating both circles. Is there some nice way to stop Mathematica from joining points which are more than some distance, say $l$, apart?
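The underlying idea is language-agnostic: split the point list wherever two consecutive points are farther than $l$ apart, then draw each run as its own polyline. A sketch of the grouping step in Python (the helper name is mine):

    import numpy as np

    def split_by_gap(points, l):
        # break the polyline wherever consecutive points are more than l apart
        pts = np.asarray(points)
        gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1) > l
        return np.split(pts, np.flatnonzero(gaps) + 1)  # list of sub-polylines

In Mathematica terms, if I remember the built-ins correctly, Split[list, EuclideanDistance[#1, #2] < l &] performs the same grouping, after which the pieces can be drawn with Line /@ groups inside Graphics.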

graphics3d – Get 3D surface grown by constant distance

There is a nice example of how to generate an isometric visualisation of surfaces which are at a constant distance from a given region.

All these examples are just working on one single region. If one tries to apply this functionality to RegionUnions then the evaluation does not yield any results. E.g. a simple union of two cuboids with subsequent plotting of the surface which is 0.25 away from their surfaces yields no result:

ContourPlot3D[Evaluate@RegionDistance[RegionUnion[
   Cuboid[{-5, -5, 0}, {5, 5, 1}], 
   Cuboid[{-10, -10, -10}, {10, 10, 0}]], {x, y, z}], 
   {x, -7, 7}, {y, -7, 7}, {z, 0, 2}, 
   Mesh -> None, Contours -> {0.25}, 
   ContourStyle -> ColorData[94, "ColorList"], Lighting -> "Neutral",
   BoxRatios -> Automatic]

whereas the simpler command with just one cuboid generates the expected result:

ContourPlot3D[Evaluate@RegionDistance[
   Cuboid[{-5, -5, 0}, {5, 5, 1}], {x, y, z}], 
   {x, -7, 7}, {y, -7, 7}, {z, 0, 2}, 
   Mesh -> None, Contours -> {0.25}, 
   ContourStyle -> ColorData[94, "ColorList"], Lighting -> "Neutral",
   BoxRatios -> Automatic]


Is the functionality perhaps just limited to single regions and not unions? Any ideas how to achieve this for RegionUnions or RegionIntersections?
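One observation that may help, whatever the cause of the RegionUnion behaviour: the distance to a union is just the pointwise minimum of the member distances,

$$
d(p,\, A \cup B) = \min\big(d(p, A),\; d(p, B)\big),
$$

so, as a workaround, one could take the Min of two separate single-region RegionDistance expressions and contour that instead of calling RegionDistance on the union.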

SAT-Solver algorithm for Levenshtein distance

Is there a SAT-solver algorithm for the Levenshtein distance problem?

Algorithm for cyclic $n$-string Hamming distance with constant-sized language $\Sigma$

Suppose we are given a language $\Sigma$ where, suppose, $|\Sigma| = O(1)$. Consider two fixed strings $A, B \in \Sigma^n$. Define the Hamming metric between these strings as
$$d_{H}(A,B) = \sum_{i=1}^n \boldsymbol{1}\{A(i) \neq B(i)\}$$
If we define $B^{(k)}$ as the $k$-shift (to the right) cyclic permutation of $B$, then what I am looking to compute is
$$d_{\text{cyc},H}(A,B) = \min_{k \in \{0, \cdots, n-1\}} d_H\left(A, B^{(k)}\right)$$
So it is easy to see that we can compute $d_H(A,B)$ for some length-$n$ strings $A$ and $B$ in time $O(n)$, implying a trivial $O(n^2)$ algorithm for $d_{\text{cyc},H}(A,B)$. My goal is to see if we can do something better. If someone knows of an algorithm that generalizes to any constant value of $|\Sigma|$, I would be happy to know. For now, I will lay out some of my thoughts.


Suppose that $|\Sigma| = 2$, namely that $\Sigma = \{\alpha, \beta\}$. Let us define a map $h: \Sigma \rightarrow \{-1, 1\}$ where, say, $h(\alpha) = -1$ and $h(\beta) = 1$. If we transform the strings $A$ and $B$ element-wise into strings $A'$ and $B'$ in $\{-1, 1\}^n$, we can then compute all of the $d_H\left(A, B^{(k)}\right)$ values via an FFT of the concatenated string $B'B'$ and $A'$. We can see this by first considering the computation of $d_H(A,B)$. Suppose $I_{=} \subseteq [n]$ is the set of indices for characters where $A$ and $B$ are the same and let $I_{\neq} = [n] \setminus I_{=}$ be the set of indices where $A$ and $B$ differ. Clearly $I_{=}$ and $I_{\neq}$ are disjoint, so $|I_{=}| + |I_{\neq}| = n$. Now let us compute the inner product of $A'$ and $B'$. At any position where $A$ and $B$ have the same character, $A'$ and $B'$ have the same sign; at any position where $A$ and $B$ differ, the signs differ as well. Thus we find that
$$(A' \cdot B') = \sum_{i=1}^n A'(i) B'(i) = \sum_{i \in I_=} A'(i) B'(i) + \sum_{i \in I_{\neq}} A'(i) B'(i) = |I_=| - |I_{\neq}|$$
As $d_H(A,B) = |I_{\neq}|$ and $(A' \cdot B') = |I_{=}| - |I_{\neq}| = n - 2|I_{\neq}|$, this implies that we can find $d_H(A,B)$ to be equal to
$$d_H(A,B) = |I_{\neq}| = \frac{1}{2}\left(n - (A' \cdot B')\right)$$
Now if $\text{rev}(S)$ reverses a string $S$ of size $n$, implying that $S(i) = \text{rev}(S)(n-i)$, we can observe that if we define the string $C' = \text{rev}(B'B')$, we can find for any $k \in [n]$ that
\begin{align}
v_k &:= \sum_{i=1}^n C'((n-k+1)-i)A'(i)\\
&= \sum_{i=1}^n (B'B')((k-1) + i)A'(i) \\
&= \sum_{i=1}^n (B')^{(k-1)}(i) A'(i) \\
&= \left((B')^{(k-1)} \cdot A'\right) \\
&= n - 2 d_H\left( A, B^{(k-1)} \right)
\end{align}

This implies that the convolution of the strings $C'$ and $A'$ gives us a mechanism to compute all values of $d_H\left(A, B^{(k)}\right)$, which can be done in $O(n \log(n))$ time using the Fast Fourier Transform (FFT). This sounds great for the special case $|\Sigma| = 2$, but I am unsure about an efficient, exact way that generalizes to larger constant values for the size of $\Sigma$.
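A minimal NumPy sketch of the binary case above, phrased as circular cross-correlation via the FFT (equivalent to the reverse-and-concatenate formulation):

    import numpy as np

    def cyclic_hamming_all(A, B):
        # all d_H(A, B^(k)), k = 0..n-1, for equal-length binary strings,
        # where B^(k) is the right-cyclic shift of B by k; O(n log n)
        n = len(A)
        a = np.array([1.0 if ch == '1' else -1.0 for ch in A])
        b = np.array([1.0 if ch == '1' else -1.0 for ch in B])
        # r[k] = sum_i a[i] * b[(i - k) mod n] = (A' . B'^(k))
        r = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
        return (n - r) / 2  # d_H = (n - inner product) / 2

    # d_cyc is the minimum over all shifts, e.g.:
    # int(cyclic_hamming_all("0110", "0011").min())  # -> 0

As for larger alphabets: one exact extension, which I believe is standard, is to use a 0/1 indicator vector per character. The number of positional matches at shift $k$ is the sum over characters of the indicator cross-correlations, so all shifts can still be computed exactly in $O(|\Sigma|\, n \log n)$ time.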

My initial thought for an approximation is to create, say, an $r$-wise independent family of hash functions $\mathcal{H} := \left\{ h: \Sigma \rightarrow \{-1, 1\} \,\middle|\, \forall c \in \Sigma,\ h(c) = 1 \text{ with prob } 1/2 \right\}$ for $r$ at least 2, uniformly sample some $h \in \mathcal{H}$, and then for a string $A \in \Sigma^n$ set $A'(i) = h(A(i))$. If we define the random variable $Y(A,B) = A' \cdot B'$ under this type of transformation, we can find that
\begin{align}
\mathbb{E}\left(Y(A,B)\right) &= \sum_{i=1}^n \mathbb{E}\left(A'(i)B'(i)\right) \\
&= \sum_{i \in I_{=}} \mathbb{E}\left( A'(i)B'(i)\right) + \sum_{i \in I_{\neq}} \mathbb{E}\left(A'(i)B'(i)\right)
\end{align}

Consider two characters $a, c \in \Sigma$. If $a = c$, then $\mathbb{E}(h(a)h(c)) = \mathbb{E}(h(a)^2) = \mathbb{E}(1) = 1$ since $h(a) = h(c)$. If $a \neq c$, then $\mathbb{E}(h(a)h(c)) = \mathbb{E}(h(a))\,\mathbb{E}(h(c)) = 0$. This result implies that
\begin{align}
\mathbb{E}\left(Y(A,B)\right) &= \sum_{i \in I_{=}} \mathbb{E}\left( A'(i)B'(i)\right) + \sum_{i \in I_{\neq}} \mathbb{E}\left(A'(i)B'(i)\right) \\
&= |I_{=}| \\
&= n - |I_{\neq}|
\end{align}

This means that technically we could use the estimator $\hat{d}_H(A,B) = n - Y(A,B)$. Obviously we could then average across $k$ estimates to reduce the variance, but initial calculations of the variance of this estimator seem to show that it satisfies $\text{Var}(\hat{d}_H(A,B)) = \Theta(n^2)$, which makes some sense because there are hash functions that could get things completely wrong. For instance, if we happen to choose a hash function such that $h(c) = 1$ for all $c \in \Sigma$, then we will estimate that the strings are identical even if the strings have no overlap, e.g. $A = aaa$ and $B = bbb$. Thus, this randomized approach does not seem sound. If anyone has ideas for how it could be modified to improve the concentration properties, I would be happy to hear them!

mathematics – Can I get world-space z-near-plane vertices from projection matrix + z-near distance?

I’m trying to port ShaderToy VR shaders to WebXR (the new browser API for AR/VR devices).

ShaderToy VR shaders expect two extra parameters: ray origin (the view translation) and ray direction (a unit vector from the ray origin through each pixel). I could easily calculate this ray direction if I had fovX, fovY, and the distance to the projection plane.

The problem: WebXR does not expose FOVs and instead just provides a projection matrix (apparently some devices might need more complex projection matrices with shear, roll, etc.). WebXR also exposes zNear and zFar, but that’s all the frustum info available, if I’m not mistaken.

Is there any way to obtain the zNear plane corners in world-space with this information?
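For what it’s worth, here is a sketch of the standard unprojection approach, written in NumPy for brevity (in WebXR the same arithmetic would use the view’s transform matrix and a library such as gl-matrix). In WebGL-style clip space the near plane sits at NDC $z = -1$, so the four corners are $(\pm 1, \pm 1, -1, 1)$; push them through the inverse of projection times view and divide by $w$:

    import numpy as np

    def near_plane_corners_world(proj, view):
        # proj, view: 4x4 matrices, column-vector convention
        inv = np.linalg.inv(proj @ view)
        corners = []
        for x, y in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
            p = inv @ np.array([x, y, -1.0, 1.0])
            corners.append(p[:3] / p[3])  # perspective divide back to world space
        return corners

The ray origin is then the translation part of the inverse view matrix, and per-pixel ray directions can be interpolated between these four corners.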