## 3d: correcting distortion resulting from ray marching

I am trying to combine a ray-marching algorithm with conventional rendering methods.

My goal is to represent a sphere and a corresponding atmosphere that gradually fades into space.
It works fairly well and looks okay, but problems arise when rendering the sphere (a triangle mesh) and the atmosphere simultaneously, due to perspective.

The image produced by the ray traversal distorts much more drastically toward the screen edges than the one produced by conventional rendering.

I use the following projection matrix for conventional rendering:

``````Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, (float)Width / (float)Height, 0.01f, 1000.0f);
``````

For raymarching I use the following two methods:

``````float3 rayDirection(float fieldOfView, float2 size, float2 fragCoord)
{
    float2 xy = fragCoord - size / 2.0;
    float z = size.y / tan(radians(fieldOfView) / 2.0);
    return normalize(float3(xy, -z));
}

float3x3 viewMatrix(float3 eye, float3 center, float3 up)
{
    // Based on the gluLookAt man page
    float3 f = normalize(center - eye);
    float3 s = normalize(cross(f, up));
    float3 u = cross(s, f);
    return float3x3(s, u, -f);
}
``````

from http://jamie-wong.com/2016/07/15/ray-marching-signed-distance-functions/

These are the results:

Is there a way to make both renderings share exactly the same perspective?
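One likely source of the mismatch (an observation, not a confirmed diagnosis): `CreatePerspectiveFieldOfView` treats `PiOver4` as the *full* vertical field of view, mapping the half-height of the image plane to `tan(fov/2)`, while the shader's `z = size.y / tan(radians(fieldOfView) / 2.0)` divides the full height by that tangent, which yields a much narrower effective FOV. A quick numeric check in plain Python (720 px is a hypothetical height; the result is height-independent):

```python
import math

def effective_half_angle(fov_deg, height):
    # the shader's depth: z = size.y / tan(fov / 2)
    z = height / math.tan(math.radians(fov_deg) / 2.0)
    # angle of the ray through the top edge of the image (y = height / 2)
    return math.degrees(math.atan((height / 2.0) / z))

def matrix_half_angle(fov_deg):
    # CreatePerspectiveFieldOfView interprets fov as the full vertical angle
    return fov_deg / 2.0

def fixed_half_angle(fov_deg, height):
    # matching formula: z = (height / 2) / tan(fov / 2)
    z = (height / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    return math.degrees(math.atan((height / 2.0) / z))

print(matrix_half_angle(45.0))          # 22.5
print(effective_half_angle(45.0, 720))  # ~11.7 -- narrower than the matrix camera
print(fixed_half_angle(45.0, 720))      # 22.5 -- matches the matrix camera
```

So halving `size.y` in the `z` computation (or doubling the FOV passed to the shader accordingly) should bring the two cameras into agreement.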

## r – How can I obtain the dataframe used when estimating a binomial logit model with glm?

I have a `dataframe` with several missing values, and I want to run an analysis of variance (ANOVA) to compare two binomial logit models:

• Model A: Contains a set of variables.
• Model B: Contains the same variables as Model A plus 3 study variables.

We import the data:

Model A:

``````modelo_logit <- glm(SAP ~ sexo + edad + peso + niv_est + enf_cron + sit_lab +
                      frec_act_fis + ingreso_eq + GHQ_12,
                    data = datos_modelo, family = binomial(link = "logit"),
                    na.action = "na.omit")
``````

Model B:

``````modelo_logit_viv <- glm(SAP ~ sexo + edad + niv_est + enf_cron + sit_lab +
                          frec_act_fis + ingreso_eq + GHQ_12 +
                          n_dormitorios + cont_indus + delincuencia, # variables de estudio
                        data = datos_modelo, family = binomial(link = "logit"),
                        na.action = "na.omit")
``````

In order to carry out an ANOVA I run: `anova(modelo_logit, modelo_logit_viv)`

And I get the following error:

``````Error in anova.glmlist(c(list(object), dotargs), dispersion = dispersion, :
  models were not all fitted to the same size of dataset
``````

Both models must be fitted to the same data set, but because of the missing values, model B drops more `NA` rows than model A (its extra variables increase the number of observations removed by `na.omit`).

My question then is: how can I obtain the dataframe used to fit model B (`modelo_logit_viv`) so that I can fit model A (`modelo_logit`) on exactly the same dataframe and then run the ANOVA? There must be some element inside the object returned by `glm()` that contains the dataframe actually used once the `NA`s were dropped, but I can't find it.
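For what it's worth, the usual way to make two nested fits comparable is to restrict the data to rows that are complete in all of the *larger* model's variables before fitting either model. The filtering idea, sketched in plain Python (the rows here are hypothetical; the column names mirror the R formulas above):

```python
# Hypothetical mini-dataset; None plays the role of R's NA.
rows = [
    {"SAP": 1, "sexo": "M", "n_dormitorios": 3,    "cont_indus": 0,    "delincuencia": 0},
    {"SAP": 0, "sexo": "F", "n_dormitorios": None, "cont_indus": 1,    "delincuencia": 0},
    {"SAP": 1, "sexo": "F", "n_dormitorios": 2,    "cont_indus": None, "delincuencia": 1},
]

# A row must be complete in every variable of the larger model (model B)
# to be usable by *both* fits.
model_b_vars = ["SAP", "sexo", "n_dormitorios", "cont_indus", "delincuencia"]

complete = [r for r in rows if all(r[v] is not None for v in model_b_vars)]
print(len(complete))  # 1 -- only the first row survives
```

Fitting both models on `complete` (in R terms, on the subset of `datos_modelo` with no `NA` in any of model B's variables) guarantees the two fits see the same number of observations, so the ANOVA no longer complains.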

## usa – An emergency vehicle repair is resulting in an excessive stay in the US [closed]

An emergency vehicle repair is resulting in an excessive stay in the US. Will this affect my 182 days?

## numerical integration – FEM: Obtaining the resulting electric force on each body in the electric field

Based on this excellent FEM answer (electric field between two arbitrarily defined shapes), I can calculate the electric field `ef` between two conductive objects.

$$F = qE$$
Now I have tried to calculate the resulting total electric force on each object (acting at its geometric center) by simply integrating the electric field along the object's boundary:

``````NIntegrate[
 Evaluate[ef], {x, y} ∈
  Region`RegionProperty[RegionBoundary[object1], {x, y},
    "FastDescription"][[1]][[2]]]
``````

But it does not work. Any help would be highly appreciated.

Here is the complete code to calculate the electric field:

``````Needs["NDSolve`FEM`"];
(*Define boundaries*)
air = Rectangle[{-5, -5}, {5, 5}];
object1 = Rectangle[{-2.5, 2.5}, {2.5, 2}];
object2 = Rectangle[{-2.5, -2.5}, {2.5, -2}];
reg12 = RegionUnion[object1, object2];
reg = RegionDifference[air, reg12]

mesh = ToElementMesh[reg, MaxCellMeasure -> 0.1];
mesh["Wireframe"]

eq = Laplacian[u[x, y], {x, y}]; V1 = 1; V2 = -2;
bc = {DirichletCondition[u[x, y] == V1,
    Region`RegionProperty[RegionBoundary[object1], {x, y},
      "FastDescription"][[1]][[2]]],
   DirichletCondition[u[x, y] == V2,
    Region`RegionProperty[RegionBoundary[object2], {x, y},
      "FastDescription"][[1]][[2]]]};
U = NDSolveValue[{eq == 0, bc}, u, {x, y} ∈ mesh];

ef = -Grad[U[x, y], {x, y}];

StreamDensityPlot[Evaluate[ef], {x, y} ∈ reg,
 ColorFunction -> "Rainbow", PlotLegends -> Automatic,
 FrameLabel -> {x, y}, StreamStyle -> LightGray, VectorPoints -> Fine,
 PlotRange -> Automatic]
``````
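As an aside on the numerics (a sketch, not Mathematica-specific): a boundary integral of a sampled vector field can be approximated by parametrizing each edge of the rectangle and applying the midpoint rule. Note that, physically, the net force on a conductor is usually obtained from the Maxwell stress tensor (or from the surface charge density), not by integrating the bare field, so this only illustrates the integration step itself. A plain-Python sketch over `object1`'s rectangle (corners normalized), with a hypothetical constant field:

```python
import math

def boundary_integral(field, xmin, ymin, xmax, ymax, n=1000):
    """Integrate a 2-D vector field along the boundary of a rectangle
    (scalar arc-length measure), returning the vector (Fx, Fy)."""
    edges = [
        ((xmin, ymin), (xmax, ymin)),
        ((xmax, ymin), (xmax, ymax)),
        ((xmax, ymax), (xmin, ymax)),
        ((xmin, ymax), (xmin, ymin)),
    ]
    fx = fy = 0.0
    for (x0, y0), (x1, y1) in edges:
        length = math.hypot(x1 - x0, y1 - y0)
        ds = length / n
        for k in range(n):
            t = (k + 0.5) / n  # midpoint rule along this edge
            x = x0 + t * (x1 - x0)
            y = y0 + t * (y1 - y0)
            ex, ey = field(x, y)
            fx += ex * ds
            fy += ey * ds
    return fx, fy

# Constant field: the integral is field * perimeter (perimeter here = 11)
print(boundary_integral(lambda x, y: (1.0, 0.0), -2.5, 2.0, 2.5, 2.5))
```

In the Mathematica setting the same idea amounts to evaluating `ef` componentwise along each boundary edge; `NIntegrate` over the `RegionBoundary` should then be fed one scalar component at a time rather than the unevaluated vector.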

## java – Heuristic for board game AI resulting in draw

I am developing a program for the game Lines of Action (see the specification here: http://www.boardspace.net/loa/english/index.html).

For the AI, I've implemented a game-tree search, quite similar to most game-tree algorithms, and it seems to work.

Now, I'm not sure how to write the heuristic: basically, what to return when the search depth for good moves reaches zero.

Basically it is a function that is supposed to help me evaluate the "score" of a move.

Based on the specs, I've included a few different factors in my function, but I'm not sure how to weight them.

This is what I'm considering (by the way, I'm optimizing for the white player):

• Is the game over? If so, return the winner's value, or 0 if it is a tie
• Compare the largest region sizes of both players, favoring WP
• How many contiguous regions of pieces does each player have? The more the worse, hence numRegionsBP – numRegionsWP
• How many pieces are left? The more the better.
• Possible legal moves on the current board: I iterate over them and check whether any leads to BP or WP having all of its pieces contiguous (again, optimizing for WP)

Here is my code to compute the score. Unfortunately, it always results in a tie, even though it should be favoring one side. Should I change my factors or weights? I would appreciate some guidance:

``````private int staticScore(Board board) {
    if (board.gameOver() && board.winner() != null) {
        if (board.winner().equals(WP)) {
            return WINNING_VALUE;
        } else if (board.winner().equals(BP)) {
            return -WINNING_VALUE;
        } else {
            return 0;
        }
    }
    // factor for contiguous region sizes
    List<Integer> whiteRegions = board.getRegionSizes(WP);
    List<Integer> blackRegions = board.getRegionSizes(BP);
    int maxW = Collections.max(whiteRegions);
    int maxB = Collections.max(blackRegions);
    int contigRegions = maxW - maxB;

    // factor for number of contiguous regions (fewer is better)
    int regionsNumW = whiteRegions.size();
    int regionsNumB = blackRegions.size();
    int regionsNumDifference = regionsNumB - regionsNumW;

    // factor for number of pieces left - maybe delete (currently unused below)
    int numW = board.getNum(WP);
    int numB = board.getNum(BP);
    int numDif = numW - numB;

    // account for moves that immediately connect all of a player's pieces
    int piecesContig = 0;
    for (Move mv : getBoard().legalMoves()) {
        getBoard().makeMove(mv);
        if (getBoard().piecesContiguous(BP)) {
            piecesContig = -1;
            getBoard().retract(); // undo before leaving the loop
            break;
        } else if (getBoard().piecesContiguous(WP)) {
            piecesContig = 1;
            getBoard().retract();
            break;
        }
        getBoard().retract();
    }
    return 20 * contigRegions + 10 * regionsNumDifference + 20 * piecesContig;
}
``````
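One property worth checking in any such evaluation is that terminal results dominate every achievable positional sum; otherwise positional swings can drown out a forced win and the search drifts toward draws. A minimal Python sketch of the same shape of function (the factor values, weights, and `WINNING_VALUE` here are hypothetical, mirroring the Java above):

```python
WINNING_VALUE = 10**6  # must dominate any sum of positional terms

def static_score(winner, contig_regions, regions_num_diff, pieces_contig):
    """Hypothetical mirror of staticScore: terminal results first,
    then a weighted sum of positional factors (weights are guesses)."""
    if winner == "WP":
        return WINNING_VALUE
    if winner == "BP":
        return -WINNING_VALUE
    return 20 * contig_regions + 10 * regions_num_diff + 20 * pieces_contig

# Even a generous upper bound on the positional factors stays far
# below a won position's score.
best_positional = static_score(None, 24, 8, 1)
print(best_positional)                    # 580
print(static_score("WP", 0, 0, 0))        # 1000000
```

Two things to double-check in the Java version against this shape: the board must be restored (`retract()`) on *every* path out of the move loop, including the `break` branches, or later evaluations score a mutated board; and `numDif` is computed but never added to the returned sum, so the piece-count factor currently has no effect.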

## algorithms: list the terms resulting from the decomposition of a number by repeated divisions by 2

Consider a natural number $$n > 1$$. We express it as $$\lfloor \frac{n}{2} \rfloor + \lceil \frac{n}{2} \rceil$$. We repeat the process for each of the two terms until all terms are 1 or 2. For example, $$9 = 4 + 5 = 2 + 2 + 2 + 3 = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 2$$.

There will be $$2^{\lfloor \log_2 n \rfloor}$$ terms, because the decomposition forms a full binary tree of height $$\lfloor \log_2 n \rfloor$$.

I am looking for an iterative form of this recursive process. The enumeration $$a_0 = 0,\ a_{i+1} = \left\lfloor \frac{(i+1) \cdot n}{2^{\lfloor \log_2 n \rfloor}} \right\rfloor - a_i$$ comes close, because it satisfies the following conditions: (a) each term is 1 or 2; (b) the sum of the first $$2^{\lfloor \log_2 n \rfloor}$$ terms is $$n$$. But the elements are not identical to those of the recursive decomposition.

Any help will be welcome. Thank you!
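Reading the enumeration as successive differences of the floor sequence $$a_i = \lfloor i \cdot n / 2^{\lfloor \log_2 n \rfloor} \rfloor$$ (my reading of the formula above), the two processes can be compared directly in Python. They produce the same multiset of terms, but for some $$n$$ the order differs, which matches the question's observation:

```python
from math import floor, log2

def decompose(n, depth=None):
    """Recursive decomposition: split n into floor(n/2) + ceil(n/2),
    carried down to a full tree of height floor(log2 n)."""
    if depth is None:
        depth = floor(log2(n))
    if depth == 0:
        return [n]
    return decompose(n // 2, depth - 1) + decompose((n + 1) // 2, depth - 1)

def enumerate_terms(n):
    """Iterative enumeration: successive differences of floor(i*n / 2^k)."""
    k = floor(log2(n))
    m = 2 ** k
    return [(i + 1) * n // m - i * n // m for i in range(m)]

print(decompose(9))        # [1, 1, 1, 1, 1, 1, 1, 2]
print(enumerate_terms(9))  # [1, 1, 1, 1, 1, 1, 1, 2]  -- identical here
print(decompose(11))       # [1, 1, 1, 2, 1, 2, 1, 2]
print(enumerate_terms(11)) # [1, 1, 2, 1, 1, 2, 1, 2]  -- same multiset, different order
```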

## c ++: divide a list so that the resulting two lists have the same average

Description of the problem:

The purpose of this function is to take a sorted list of numbers and divide it into two evenly balanced lists. By balanced I mean that the averages of the two lists are as close as possible. Simply put, each resulting list must get the same number of large numbers as small numbers. The algorithm I use removes the largest and smallest numbers from the input list and appends them alternately to one of the two output lists. I am open to other algorithms.

Assumptions

• the input list is sorted in ascending order
• all values in the list are greater than 0
• there are no duplicates in the list
• the list has at least 3 elements

Correct examples:

``````{2,4,5,9}=>{2,9},{4,5}
{1,2,3}=>{1,3},{2}
{1,2,3,4,5,6}=>{1,6,3},{2,5,4}
{1,2,3,4,5,6}=>{2,6,3},{1,5,4}
``````

Incorrect examples:

``````{1,2,3,4,5,6}=>{1,2,3},{4,5,6}
{2,4,5,9}=>{2,4},{5,9}
``````

The code:

``````#include <iostream>
#include <vector>
#include <cstdlib>
#include <cassert>

using namespace std;

/*prototypes*/
void splitInTwo(vector<int> in, vector<int> &out1, vector<int> &out2);//should the last two be passed by const reference?
void displayContents(const vector<int> &in);

int main()
{
    cout << "program started" << endl;
    vector<int> a = {2,3,4,5,6,7,8,10};
    vector<int> b = {2,3,5,7,8,12,20,40};
    vector<int> c = {1,2,3};
    vector<int> d = {10, 15, 33};
    vector<int> e = {10, 20, 30, 40, 50, 60, 70};
    vector<int> f = {1,2,3,4,5,6};
    vector<int> g = {1,2,3,4,5,6,7,8,9,10};
    vector<int> h = {1,2,3,4,5,6,7,8,9,10,11,12};
    vector<int> i = {1,2,3,4,5,6,7,8,9,10,11,12,13};
    vector<int> j = {1,2,3,4,5,6,7,8,9,10,11,12,14};

    vector<int> out1, out2;

    splitInTwo(a, out1, out2);
    splitInTwo(b, out1, out2);
    splitInTwo(c, out1, out2);
    splitInTwo(d, out1, out2);
    splitInTwo(e, out1, out2);
    splitInTwo(f, out1, out2);
    splitInTwo(g, out1, out2);
    splitInTwo(h, out1, out2);
    splitInTwo(i, out1, out2);
    splitInTwo(j, out1, out2);

    return 0;
}

void splitInTwo(vector<int> in, vector<int> &out1, vector<int> &out2)
{
    out1.clear();
    out2.clear();
    out1.reserve((in.size() + 1) / 2);
    out2.reserve(in.size() / 2);
    bool alternate = true;
    for(int i = 0, j = in.size() - 1; i <= j; i++, j--)//why exactly doesn't auto work here?
    {
        if(i == j)//i and j point to the same element
        {
            if(alternate)
            {
                out1.push_back(in[i]);
            }
            else
            {
                out2.push_back(in[i]);
            }
        }
        else if(j - i == 1)//i and j point to adjacent elements
        {
            if(out1.size() < out2.size())
            {
                out1.push_back(in[i]);
                out1.push_back(in[j]);
            }
            else if(out1.size() > out2.size())
            {
                out2.push_back(in[i]);
                out2.push_back(in[j]);
            }
            else//equal size
            {
                out1.push_back(in[i]);
                out2.push_back(in[j]);
            }
            break;
        }
        else if(alternate)
        {
            out1.push_back(in[i]);
            out1.push_back(in[j]);
        }
        else
        {
            out2.push_back(in[i]);
            out2.push_back(in[j]);
        }
        alternate = !alternate;//NB the operator is !, not !=
    }

    //cast to int: the sizes are unsigned, and a negative difference would wrap
    assert(abs((int)out1.size() - (int)out2.size()) <= 1 && "incorrect length of return vector");

    //for testing only
    cout << "in: " << endl;
    displayContents(in);
    cout << "out: " << endl;
    displayContents(out1);
    displayContents(out2);
}

void displayContents(const vector<int> &in)
{
    for(auto i : in)
        cout << i << ", ";
    cout << "\n";
}
``````

Specific question:

At first I thought the problem would be much simpler to solve. It would be nice to remove some of the nested if statements from the code. In the outer loop, I'm curious why `auto` could not be used? I guess it's because the literal `0` is an `int` and `size()` returns an unsigned type?

Since I thought the problem was easier than it is, some aspects of the code did not scale well. For example, I would like to put all the test cases in an array. Any comments on unit testing or general design principles? Any comments are welcome; I would like to make the most of this learning experience 🙂

Similar work:

There is a similar problem here. However, this one is different because the input list does not need to be sorted. In the analysis of their "Efficient Solution" they give the time complexity as O(n). Mine, I think, has a runtime of θ(n/2). Is this correct? In this context, isn't it more correct to discuss running time than time complexity?

## dnd 5e – If a Necklace of Fireballs with 9 beads is thrown, is the resulting fireball effect equivalent to a 9th-level cast or an "11th-level" cast?

### You can throw the whole necklace for the equivalent of an "11th-level" fireball

The description of the Necklace of Fireballs reads (DMG, p. 182):

This necklace has 1d6 + 3 beads hanging from it. You can use an action to detach a bead and throw it up to 60 feet away. When it reaches the end of its trajectory, the bead detonates as a 3rd-level fireball spell (save DC 15).

You can hurl multiple beads, or even the whole necklace, as one action. When you do so, increase the level of the fireball by 1 for each bead beyond the first.

This means that you can throw a whole necklace for the equivalent of an "11th-level" fireball.

This is due to the specific mechanics of this item; it is an exception to a general rule (see PHB p. 7, "Specific Beats General"). In most circumstances you cannot cast a spell beyond 9th level, simply because there are no 10th- or 11th-level spell slots.

However, in this case, if you were lucky enough to get a Necklace of Fireballs with 9 beads (1d6 + 3) and threw the entire necklace, you would deal damage equal to an 11th-level fireball.

Fireball is already a 3rd-level spell, so the +8 extra beads take it to the equivalent of an 11th-level spell.

The damage would be 8d6 + 8d6, an average of 56 fire damage on a failed Dexterity save, which is not that much considering that a 9th-level spell such as meteor swarm can deal 40d6 fire damage (140 on average) on a failed save.

As a side note, you can deal much more damage by throwing one bead at a time! Thrown this way, a necklace with 9 beads can potentially deal 9 × 8d6 damage, an average of 252 (9 × 28) across the failed saves.

## dnd 5e: Can a ghost (undead creature) take over a clone body (the product of the clone spell)?

As far as the lore goes, ghosts (and demons) have nifty powers of possession, and this is often a fun game mechanic. In fantasy fiction they possess the most ridiculous things, from cars to people to entire houses, going wherever they want. It's fun.

This tradition continues in 5e. The ghost still has this mechanic, at least for humanoids (no word on whether dragons, giants or other sentient beings have ghosts, but I digress). Demons also mention possession in their Monster Manual descriptions and can inhabit objects, although the actual mechanics of how a DM should run this, or how and/or whom they can possess, are not mechanically developed (please correct me if I am wrong about this; such rules or rulings would totally rock).

This brings us to the question (above): when the clone spell is cast and matures over 120 days, it becomes a perfect (though scar-free and possibly much younger) humanoid version of the humanoid whose cubic inch of flesh was the material component. Groovy! When the original owner of that cubic inch of person "dies" (zero hp? chooses to leave? didn't become undead?), their soul (and/or spirit?) leaves the body and simply travels (astrally? like a ghost? ethereally? teleports? warp factor ten? Speed Force?) to its BodyPrime / clone location. They wake up (full hit points? all spells? remembering their death? PTSD?) ready to conquer the world. Easy!

Assuming this 120-day clone body is ready and waiting for action: why can't a passing ghost (or demon, really) simply, you know, take control for a moment? Or why not forever? Why would it have to leave if/when knocked out ("zero hit points"), given there is no "imprint" of the original soul-spirit essence in it? Why can't a wizard (or bard, or Nagpa, or a powerful dragon caster... whatever) go into business pumping out bodies of Brad Pitt from when he was so sexy, or of any actress named "Jennifer" for that matter, or of practically anyone you can get a cube of flesh from, with a bit of gentle repose style magic added?

SO MANY CAMPAIGN IDEAS!

But wait a moment! How much of this is actually RAW? Fortunately we have the good lads (and ladies) of StackExchange to tell me what's what. Here, good people, tell me whether I've had too many magic mushrooms.

So: how do ghosts (and/or any alien spirit, such as a demon) interact with a clone body that is more than 120 days old? Can they take it over temporarily? To what extent could/would it work as a host body for such a creature?

Honestly, this would be fun. One can imagine a full D&D version of Altered Carbon... but we must respect 5e RAW. What exactly is that?

Also: I understand this question is badly phrased ("entertaining but not concise"). I will gladly amend it to meet the stoic (dour? terse?) constraints required by the StackExchange mod-editorial community! =)

## php – How to count how many times a column value is repeated in a SELECT query without affecting the resulting rows in MySQL?

I came across this case and I hope you can help me. I would like to know if it is possible to gather this data in a single MySQL query, but without affecting the resulting rows. For example:

A table of students with the following fields:

nombre (name), apellido (last name), tipo (kind)

Data:
Juan, Arias, R1
Daniel, Garcia, R2
Pedro, Valencia, R3
Valeria, Muñóz, R2

The question is how many times each value of the field `tipo` is repeated; that is, if I run the query:

``````select * from estudiantes
``````

It should return:

``````nombre,   apellido, tipo, R2, R1, R3
Juan,     Arias,    R1,   2,  1,  1
Daniel,   Garcia,   R2,   2,  1,  1
Pedro,    Valencia, R3,   2,  1,  1
Valeria,  Muñóz,    R2,   2,  1,  1
``````

This way I would know that, within the result, the type R1 is repeated 1 time, the type R2 is repeated 2 times, and the type R3 is repeated 1 time.

I thought about doing the sums for items already having the query or from jquery already getting the data, but the problem is not known how many types of students will be in the bd, they could be from R1 until R9 Or until RN and that takes away that possibility.