## java – Could you tell me what is wrong with my Dijkstra implementation?

The problem is from HackerRank.
First I build a graph and return a source `Node`. The goal is to reduce complexity and make navigation between vertices easier, since using a raw `int[][] edges` does not seem like a clear approach to me.

Then I run Dijkstra with a `PriorityQueue`, but it only works on base cases.

```
static int[] shortestReach(final int n, final int[][] edges, final int s) {
    final Node source = buildGraph(n, edges, s);
    final Map<Integer, Integer> dists = new HashMap<>();
    dists.put(s, 0);
    final Set<Integer> visited = new HashSet<>();
    final Queue<Node> queue =
        new PriorityQueue<>(Comparator.comparing(v -> dists.getOrDefault(v.val, Integer.MAX_VALUE)));
    while (!queue.isEmpty()) {
        final Node node = queue.poll();
        final int dist = dists.get(node.val);
        // (the relaxation of the neighbours of `node` was lost in the paste)
    }
    return buildResult(dists, s, n);
}

private static int[] buildResult(final Map<Integer, Integer> dists, final int s, final int n) {
    final int[] result = new int[n - 1];
    for (int i = 1; i <= n; i++) {
        if (i != s) {
            final int dist = dists.getOrDefault(i, -1);
            if (i < s) {
                result[i - 1] = dist;
            } else {
                result[i - 2] = dist;
            }
        }
    }
    return result;
}

private static Node buildGraph(final int n, final int[][] edges, final int s) {
    final Map<Integer, Node> graph = new HashMap<>();
    for (int i = 1; i <= n; i++) {
        graph.put(i, new Node(i));
    }
    for (final int[] edge : edges) {
        final Node node1 = graph.get(edge[0]);
        final Node node2 = graph.get(edge[1]);
        final int len = edge[2];
        // (the adjacency insertion was lost in the paste)
    }
    return graph.get(s);
}

private static class Node {
    private final int val;
    private final Map<Node, Integer> adjs = new HashMap<>();
    private Node(final int val) {
        this.val = val;
    }
}
```

Could you clarify what's wrong with my approach?
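
In case a comparison helps, here is a minimal, self-contained Dijkstra sketch in the same spirit (adjacency maps keyed by vertex id, 1-based vertices, `-1` for unreachable nodes, as in the HackerRank problem). The names and structure are illustrative, not the code above:

```java
import java.util.*;

public class Dijkstra {
    /** Returns shortest distances from s to vertices 1..n (excluding s), -1 if unreachable. */
    static int[] shortestReach(int n, int[][] edges, int s) {
        // Adjacency map: vertex -> (neighbour -> edge length); keep the shortest
        // edge when the input contains parallel edges between the same pair.
        Map<Integer, Map<Integer, Integer>> adj = new HashMap<>();
        for (int i = 1; i <= n; i++) adj.put(i, new HashMap<>());
        for (int[] e : edges) {
            adj.get(e[0]).merge(e[1], e[2], Math::min);
            adj.get(e[1]).merge(e[0], e[2], Math::min);
        }

        Map<Integer, Integer> dist = new HashMap<>();
        dist.put(s, 0);
        // Each queue entry is {vertex, distance at insertion time}; entries made
        // stale by later improvements are skipped when polled.
        PriorityQueue<int[]> queue = new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[1]));
        queue.add(new int[] {s, 0});

        while (!queue.isEmpty()) {
            int[] top = queue.poll();
            int u = top[0], d = top[1];
            if (d > dist.getOrDefault(u, Integer.MAX_VALUE)) continue; // stale entry
            for (Map.Entry<Integer, Integer> e : adj.get(u).entrySet()) {
                int nd = d + e.getValue();
                if (nd < dist.getOrDefault(e.getKey(), Integer.MAX_VALUE)) {
                    dist.put(e.getKey(), nd);
                    queue.add(new int[] {e.getKey(), nd});
                }
            }
        }

        int[] result = new int[n - 1];
        int idx = 0;
        for (int i = 1; i <= n; i++) {
            if (i != s) result[idx++] = dist.getOrDefault(i, -1);
        }
        return result;
    }

    public static void main(String[] args) {
        int[][] edges = {{1, 2, 24}, {1, 4, 20}, {3, 1, 3}, {4, 3, 12}};
        System.out.println(Arrays.toString(shortestReach(4, edges, 1))); // prints [24, 3, 15]
    }
}
```

One thing worth checking in your version: a `PriorityQueue` whose comparator reads a mutable map (your `dists`) does not re-sort already-inserted elements when the map changes afterwards, which is why the sketch above stores the distance inside each queue entry instead.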

## DirectX: abnormal lighting when computing in tangent space. Probably something wrong with the coordinate transformation matrix

I'm trying to compute the lighting in tangent space, but I keep getting abnormal results. I was modifying the demo code from the book, and I wonder if there is a problem with the transformation matrix that I build.

I'm having trouble with a problem from Introduction to 3D Game Programming with DirectX 11. I tried to use the TBN matrix

    Tx, Ty, Tz,
    Bx, By, Bz,
    Nx, Ny, Nz

according to the book, but I found that the light vector gets transformed into tangent space incorrectly, and now I have no idea how to debug this shader.

```
float4 PS1(VertexOut pin,
           uniform int gLightCount,
           uniform bool gUseTexure,
           uniform bool gAlphaClip,
           uniform bool gFogEnabled,
           uniform bool gReflectionEnabled) : SV_Target
{
    // Interpolating normal can unnormalize it, so normalize it.
    pin.NormalW = normalize(pin.NormalW);
    pin.TangentW = normalize(pin.TangentW);

    // The toEye vector is used in lighting.
    float3 toEye = gEyePosW - pin.PosW;

    // Cache the distance to the eye from this surface point.
    float distToEye = length(toEye);

    // Compute normalMapSample.
    float3 normalMapSample =
        normalize(SampledNormal2Normal(gNormalMap.Sample(samLinear, pin.Tex).rgb));

    // Normalize toEye.
    toEye = normalize(toEye);

    // Default to multiplicative identity.
    float4 texColor = float4(1, 1, 1, 1);
    if (gUseTexure)
    {
        // Sample the texture.
        texColor = gDiffuseMap.Sample(samLinear, pin.Tex);

        if (gAlphaClip)
        {
            // Discard pixel if texture alpha < 0.1.  Note that we do this
            // test as soon as possible so that we can potentially exit the shader
            // early, thereby skipping the rest of the shader code.
            clip(texColor.a - 0.1f);
        }
    }

    //
    // Lighting.
    //

    float4 litColor = texColor;
    if (gLightCount > 0)
    {
        float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 spec    = float4(0.0f, 0.0f, 0.0f, 0.0f);

        // Sum the light contribution from each light source.
        [unroll]
        for (int i = 0; i < gLightCount; ++i)
        {
            float4 A, D, S;
            ComputeDirectionalLightInTangent(gMaterial, gDirLights[i],
                normalMapSample, World2TangentSpace(pin.NormalW, pin.TangentW, gTexTransform), toEye,
                A, D, S);

            ambient += A;
            diffuse += D;
            spec    += S;
        }

        litColor = texColor * (ambient + diffuse) + spec;

        if (gReflectionEnabled)
        {
            float3 incident = -toEye;
            float3 reflectionVector = reflect(incident, normalMapSample);
            float4 reflectionColor = gCubeMap.Sample(samLinear, reflectionVector);

            litColor += gMaterial.Reflect * reflectionColor;
        }
    }

    //
    // Fogging.
    //

    if (gFogEnabled)
    {
        float fogLerp = saturate((distToEye - gFogStart) / gFogRange);

        // Blend the fog color and the lit color.
        litColor = lerp(litColor, gFogColor, fogLerp);
    }

    // Common to take alpha from diffuse material and texture.
    litColor.a = gMaterial.Diffuse.a * texColor.a;

    return litColor;
}
```

And here are the functions `SampledNormal2Normal`, `World2TangentSpace`, and `ComputeDirectionalLightInTangent`:

```
float3 SampledNormal2Normal(float3 sampledNormal)
{
    // Map [0,1] texture values to [-1,1] normal components.
    float3 normalT = 2.0f * sampledNormal - 1.0f;
    return normalT;
}

float3x3 World2TangentSpace(float3 unitNormalW, float3 tangentW, float4x4 texTransform)
{
    // Build orthonormal basis.
    float3 N = unitNormalW;
    float3 T = normalize(tangentW - dot(tangentW, N) * N);
    float3 B = cross(N, T);

    float3x3 TBN = float3x3(T, B, N);
    /* float3x3 invTBN = float3x3(T.x, T.y, T.z, B.x, B.y, B.z, N.x, N.y, N.z);
    return invTBN; */

    float3 T_ = T - dot(N, T) * N;
    float3 B_ = B - dot(N, B) * N - (dot(T_, B) * T_) / dot(T_, T_);
    float3x3 invT_B_N = float3x3(T_.x, T_.y, T_.z, B_.x, B_.y, B_.z, N.x, N.y, N.z);
    return invT_B_N;
}

void ComputeDirectionalLightInTangent(Material mat, DirectionalLight L,
                                      float3 normalT, float3x3 toTS, float3 toEye,
                                      out float4 ambient,
                                      out float4 diffuse,
                                      out float4 spec)
{
    // Initialize outputs.
    ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
    diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
    spec    = float4(0.0f, 0.0f, 0.0f, 0.0f);

    // The light vector aims opposite the direction the light rays travel.
    float3 lightVec = -L.Direction;
    lightVec = mul(lightVec, toTS);
    lightVec = normalize(lightVec);

    // Transform toEye to tangent space.
    toEye = mul(toEye, toTS);
    toEye = normalize(toEye);

    ambient = mat.Ambient * L.Ambient;

    // Add diffuse and specular term, provided the surface is in
    // the line of sight of the light.

    float diffuseFactor = dot(lightVec, normalT);

    // Flatten to avoid dynamic branching.
    [flatten]
    if (diffuseFactor > 0.0f)
    {
        float3 v = reflect(-lightVec, normalT);
        float specFactor = pow(max(dot(v, toEye), 0.0f), mat.Specular.w);

        diffuse = diffuseFactor * mat.Diffuse * L.Diffuse;
        spec = specFactor * mat.Specular * L.Specular;
    }
}
```

The result I get seems much darker in most places and too bright in several highlighted areas. I wonder if someone can help me with my code or give me advice on how to debug an HLSL shader. Thank you!
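
Not a fix, but a way to sanity-check the matrix math outside the shader: for an orthonormal T, B, N basis, the world-to-tangent matrix is simply the transpose of the tangent-to-world matrix, and transforming N itself into tangent space must yield (0, 0, 1). Below is a small Java sketch of that check (column-vector convention, so it is mirrored relative to HLSL's row-vector `mul(v, M)`; all names are illustrative):

```java
import java.util.Arrays;

public class TbnDemo {
    // 3x3 matrix times column vector.
    static double[] mul(double[][] m, double[] v) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
        return r;
    }

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    static double[] normalize(double[] a) {
        double len = Math.sqrt(dot(a, a));
        return new double[] {a[0] / len, a[1] / len, a[2] / len};
    }

    static double[][] transpose(double[][] m) {
        double[][] t = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                t[j][i] = m[i][j];
        return t;
    }

    public static void main(String[] args) {
        // Arbitrary world-space normal and a non-orthogonal raw tangent.
        double[] n = normalize(new double[] {0.2, 0.3, 0.93});
        double[] rawT = {1.0, 0.1, 0.0};

        // Gram-Schmidt: make T orthogonal to N, then B completes the basis.
        double d = dot(rawT, n);
        double[] t = normalize(new double[] {rawT[0] - d * n[0], rawT[1] - d * n[1], rawT[2] - d * n[2]});
        double[] b = cross(n, t);

        // With column vectors, rows T, B, N give the WORLD -> TANGENT matrix:
        // each component is a projection onto one basis axis.
        double[][] worldToTangent = {t, b, n};
        System.out.println(Arrays.toString(mul(worldToTangent, n)));           // should be ~(0, 0, 1)
        System.out.println(Arrays.toString(mul(transpose(worldToTangent), new double[] {0, 0, 1}))); // should recover n
    }
}
```

If transforming the world-space normal through your matrix does not land on (0, 0, 1), the matrix (or the row/column convention used with `mul`) is the thing to inspect first.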

## WordPress detects the wrong time zone

Both the local time and the time zone of my server are correct (UTC -3).

```
$ timedatectl
      Local time: Fri 2019-05-17 19:41:59 -03
  Universal time: Fri 2019-05-17 22:41:59 UTC
        RTC time: Fri 2019-05-17 22:41:58
       Time zone: America/Sao_Paulo (-03, -0300)
 Network time on: yes
NTP synchronized: yes
 RTC in local TZ: no
```

However, WordPress thinks that my server is in the UTC time zone and shifts everything by 3 hours.

Timezone

Choose either a city in the same timezone as you or a UTC timezone
offset.

Universal time (UTC) is `2019-05-17 19:38:06`. Local time is `2019-05-17 16:38:06`.

This timezone is currently in standard time. Daylight saving time
begins on `November 2, 2019 11:00 pm`.

How can I fix this?

## Installation: The wrong version of Drush is installed when you try to install the site locally

I'm trying to install Drush 8.1.17 locally using Drupal's own documentation (enter the description of the link here), but every time I try to install it, 9.2.6 gets installed instead. Globally, the correct Drush installs, but not locally for a specific project. Here is my attempt:

Globally:

Site locally:

Any reason why this is happening, or …?

## mp.mathematical-physics – What is wrong with this application of the variation of parameters method?

Consider a nonlinear ODE

$$y''(x) + a_1 y'(x) + a_0 y(x) = \sum_{i,j=0}^{2} b_{ij} \frac{d^i y}{dx^i} \frac{d^j y}{dx^j}.$$

We look for a solution as a second-order expansion of $$y(x)$$ in terms of the parameters $$c_1$$ and $$c_2$$.

Suppose that the solution of its linearization

$$y''(x) + a_1 y'(x) + a_0 y(x) = 0$$

is already known and is

$$y_\mathrm{linear}(x) = c_1 y_1(x) + c_2 y_2(x),$$

where $$c_1$$ can be the initial position and $$c_2$$ can be the initial velocity if this is an equation of motion.

Then we define $$F(x)$$ such that

$$y''(x) + a_1 y'(x) + a_0 y(x) = \sum_{i,j=0}^{2} b_{ij} \frac{d^i y}{dx^i} \frac{d^j y}{dx^j} = F(x)$$

and solve the equation

$$y''(x) + a_1 y'(x) + a_0 y(x) = F(x)$$

using the method of variation of parameters, obtaining a solution with integrals that contain $$F(x)$$ in their integrands.
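
For reference, the variation-of-parameters solution referred to here has the standard closed form, with $$W(x) = y_1(x) y_2'(x) - y_1'(x) y_2(x)$$ the Wronskian of the two homogeneous solutions:

```latex
y_p(x) = -\,y_1(x)\int \frac{y_2(x)\,F(x)}{W(x)}\,dx
         \;+\; y_2(x)\int \frac{y_1(x)\,F(x)}{W(x)}\,dx ,
```

so that $$y(x) = y_\mathrm{linear}(x) + y_p(x)$$ solves the forced equation.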

Now we insert the solution of the linearized problem

$$y_\mathrm{linear}(x) = c_1 y_1(x) + c_2 y_2(x)$$

into the nonlinear expression

$$\sum_{i,j=0}^{2} b_{ij} \frac{d^i y}{dx^i} \frac{d^j y}{dx^j} = F(x)$$

for $$F(x)$$ in these integrands and evaluate the integrals.

Keeping only the terms up to second order in $$c_1$$ and $$c_2$$, we get a second-order expansion of the solution $$y(x)$$ in terms of $$c_1$$ and $$c_2$$.

What is wrong with this application of the variation of parameters method?

Something definitely seems to be wrong with this. In particular, the results are numerically incorrect for the $$c_1 c_2$$ term when compared to a solution that is known to be correct, but the $$c_1^2$$ and $$c_2^2$$ terms are correct. However, each step seems to be technically correct, although quite unusual.

## python: I need help understanding what is wrong with this set of code that I found online

I need help with a set of laser-tracking and shape-recognition scripts. I ran the code changing only `cv2.VideoCapture(0)` to `cv2.VideoCapture("path to file")`. It tracks the laser but does not recognize the drawn shape as it did in this video: https://www.youtube.com/watch?v=6k71zf_pARE. The code I'm using is from https://github.com/andrewnaguib/Detect-Trignometric-Shapes. I want to know what is going on.

## aperture – Astrophotography – What am I doing wrong here?

Basically, no one on Earth can hold a camera steady for 15 seconds.

Most people struggle at anything much longer than about 1/100 of a second.

If you need exposures of that length, a tripod is essential, or a wall, a bean bag, or something so you are not hand-holding the camera.

Even so, open your aperture as wide as possible, because the celestial bodies move relative to the Earth, so the longer the shutter is open, the more they move. Electronically tracking that movement involves expensive hardware, so save it until you have more experience.

Raise the ISO as high as you can before you start to get unacceptable noise, to keep the exposure as short as possible. Experiment with different values and see which one you prefer.

## Is this a front-end crash or is something else wrong?

I am evaluating several notebooks (4) at the same time to perform a series of related calculations. I have been doing this for some time, and I have never encountered what I experienced today.

I have been using v12 lately, and this is the first time I have run into any kind of problem with the front end (or any kind of problem at all; I even remember thinking not many days ago that this iteration is really solid and feels more stable than previous versions).

Today I came back to my workstation only to find that Mathematica had crashed (the application was closed) and there was a Windows notification about an OpenGL error.

I think the calculations were completed, because before evaluating the last notebooks in the evaluation queue, the expected output files were already stored on my local drive. However, I have not yet verified their integrity.

I suspect that the crash, if it is related to Mathematica, must have something to do with the display of `Dataset`, because the last couple of notebooks evaluate and display Datasets.

So, essentially, my question is:

Has anyone experienced Mathematica v12 crashing under Windows 8 with the error

"OpenGL: too many errors"

when evaluating multiple notebooks at the same time, or has anyone experienced similar problems evaluating cells that display moderately sized Datasets?

Any help is appreciated.

P.S. I redid the calculations without adding the last two notebooks to the evaluation queue, and everything went well this time.

## cryptography: why is it wrong to *implement*, myself, a known, published, and widely trusted encryption algorithm?

I know the general advice that we should never design our own encryption algorithm¹. This has been widely discussed on this site and on professional-caliber websites like Bruce Schneier's.

However, the general advice goes further: it says that we should not even implement algorithms designed by people wiser than us, but rather stick to well-known, proven implementations made by professionals.

And this is the part that I could not find discussed extensively. I also did a brief search on Schneier's website and could not find this statement there either.

Therefore, why are we categorically advised not to implement cryptographic algorithms? I would appreciate an answer with a reference to an acclaimed security expert who talks about this.

¹ More precisely: design to our heart's content; it could be a good learning experience; but please, please, please, never use what we design.

## Code quality – Game Engine Architecture: what am I doing wrong?

I am working on a game engine (more like a graphics engine for physics simulations) in C++.
I know Unity a little, and I like Unity's GameObject and Component concept, so I wanted to use it in my project as well.

The structure of the code at this time is as follows:

`Newtonic::Engine`: owns the scene, the engine message bus, and the engine asset manager.

`Newtonic::Scene`: holds shared pointers to the actors (an actor is really what Unity calls a GameObject).

`Newtonic::Actor`: a single object in the scene. It has no functionality of its own; it is only a container for Behaviors, which completely define how the actor behaves and how it is rendered on screen.

`Newtonic::Behavior`: abstract class that specifies how an actor behaves when the game logic updates or when the scene renders.

When `Newtonic::Engine` needs to `Update` or `Render` the scene, it calls `Newtonic::Scene::Update` or `Newtonic::Scene::Render`; these in turn call the `Update` or `Render` methods on each actor in the scene, and finally each actor calls them on every Behavior it has.

The asset manager (`Newtonic::Assets`) owns all the resources the game needs and stores them in a `std::map`. Resources can be accessed by their id from the asset manager, which returns them as a `std::weak_ptr` (so ownership is not shared with the code requesting the resource).

The message bus should handle communication between separate parts of the engine, such as between the logic and the graphics parts, but for now it has no purpose.

• `Newtonic::Behavior`s need to have access to their parent `Newtonic::Actor` and to their sibling `Newtonic::Behavior`s (that is, the other behaviors on the same actor). One possible way to implement this would be for each Behavior to store a reference to its parent Actor, but this would create a circular-dependency problem, since `behavior.h` would need the definitions in `actor.h`, and `actor.h` already uses `behavior.h`, obviously. Even though I know this can be done, I know how bad a code design it is, and I wanted to know if I'm missing something.

• `Newtonic::Engine` owns widely used engine parts, such as the asset manager and the message bus (which should at least be used in the future). But no other part of the engine can access `Newtonic::Engine`, and again, passing it down to secondary things like Scene creates circular dependencies.

• Batch rendering. At the moment there is a `MeshRenderer` behavior that holds `weak_ptr`s to a `Newtonic::Shader` and a `Newtonic::Mesh`. In its `Render` method this behavior uses the shader with `glUseProgram` and binds the mesh's VAO. The problem with this is that if I have ~1000 `MeshRenderer`s using the same mesh and shader, I end up binding and unbinding the shader and the VAO 1000 times, which does not seem very efficient. Here, a solution could be to use the message bus to queue the mesh and shader to be rendered in a `BatchRenderer` object (which should be owned by the Engine, I suppose) and render after everything has been queued.
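
The batching idea in the last bullet can be sketched language-agnostically: behaviors submit draw requests keyed by (shader, mesh), and a batch renderer flushes them grouped, so each shader/VAO bind happens once per group rather than once per object. A minimal Java sketch (class and method names are illustrative, not Newtonic's actual API; the log list stands in for real GL calls):

```java
import java.util.*;

public class BatchRenderer {
    /** Identifies a (shader, mesh) combination; draws with equal keys can share GPU state. */
    record DrawKey(String shaderId, String meshId) {}

    /** Per-instance data submitted by a MeshRenderer-like behavior. */
    record DrawCall(DrawKey key, float[] transform) {}

    // LinkedHashMap keeps submission order of the groups deterministic.
    private final Map<DrawKey, List<DrawCall>> batches = new LinkedHashMap<>();

    /** Behaviors submit draws during the render pass instead of issuing GL calls directly. */
    public void submit(DrawCall call) {
        batches.computeIfAbsent(call.key(), k -> new ArrayList<>()).add(call);
    }

    /** After all submissions: one bind per (shader, mesh) group, then the instances. */
    public List<String> flush() {
        List<String> log = new ArrayList<>(); // stands in for glUseProgram / glBindVertexArray / draw
        for (Map.Entry<DrawKey, List<DrawCall>> e : batches.entrySet()) {
            log.add("bind " + e.getKey().shaderId() + "/" + e.getKey().meshId());
            for (DrawCall c : e.getValue()) {
                log.add("draw instance"); // e.g. upload per-instance transform, then draw
            }
        }
        batches.clear();
        return log;
    }

    public static void main(String[] args) {
        BatchRenderer r = new BatchRenderer();
        DrawKey cube = new DrawKey("basic", "cube");
        for (int i = 0; i < 3; i++) r.submit(new DrawCall(cube, new float[16]));
        r.submit(new DrawCall(new DrawKey("basic", "sphere"), new float[16]));
        // 2 binds instead of 4: draws sharing a key share one bind.
        System.out.println(r.flush());
    }
}
```

This also hints at why routing submissions through the message bus works: the behaviors never need a reference back to the Engine, they only need a way to post a `DrawCall`-like message that the batch renderer consumes.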

The code is at https://github.com/ekardnam/Newtonic. Thanks for your time!