## dnd 5e – How to map out and run an encounter set in a predominantly vertical area online?

Something like this. If you really want interesting features on all four sides of the shaft, use four adjacent rooms/maps that turn the corner at the edges. If the shaft is narrow enough that characters might jump across or use ranged weapons, just note the distance across and use it when judging whether someone can target or move to spaces of similar height on the other three pages.

I’ve done multi-page battle maps for Z-levels before, in primarily horizontal dungeons where a particular room had significant vertical elements. It slows things down significantly on Roll20 compared to playing in person, because moving tokens large distances is annoying due to the tabletop’s awkward interaction with scrollbars. It also took the slower players a couple of minutes to learn how the maps related to each other. Nevertheless, it worked out fine: the loss of speed wasn’t crippling, and the ability to exploit the room’s extra spatial relationships was exciting for some of the players.

I’ve used side-view dungeons before. They work fine and are immediately obvious even to most slow-to-learn players, though I imagine that would be different with someone who has great difficulty learning new ways of doing things and has a lot of experience with top-down battlemaps. They suit primarily vertical spaces, and they benefit from a sort of genre convention that one of the two horizontal axes isn’t particularly interesting in combat and is easy to adjudicate without being mapped, much like Z-levels in conventional dungeons.

## unity – Ability System with Modular Targeting: having trouble finding a good way to map between input components and effect components

I’m trying to build an ability system with modular targeting/inputs for a turn-based RPG in Unity.

What I mean by that is that most ability systems I’ve found use an inheritance hierarchy where there are different sub-classes for each different type of targeting, e.g. SingleTargetAbility, PBAoEAbility, VectorAbility, etc.

However, it seems like this structure either lacks flexibility or results in a huge, unwieldy number of subclasses. For example, say you made single-target abilities, like Holy Smite, and vector abilities (choose a direction radiating out from your character), like Flame Lance. Now say you want a Push ability, where the player selects a unit, then selects a direction radiating out from that unit in which to push it. With the inheritance approach, it seems like you’d have to make a new class, SingleTargetThenVectorAbility, to fit this use case, and so on for every other combination of inputs.

Instead, I tried to make a system where an ability can have any number of player inputs (i.e. targeting, but also including stuff like “choose one of 3 options”), and any number of effects (like damage, push, heal, etc), and then the inputs are mapped to each effect.

This is what I came up with (abbreviated to include only the important parts):

``````
public class AbilityInfo : ScriptableObject
{
    public string abilityName = "New Ability";

    [SerializeField]
    public string id = System.Guid.NewGuid().ToString();

    public List<AbilityInputInfo> inputInfos;

    public List<AbilityEffect> abilityEffects;
}

public abstract class AbilityInputInfo
{
    public abstract string uiPrefabName { get; set; }

    public abstract void promptInput(Ability ability);
}

public enum AbilityInputSource : int
{
    Caster = -1,
    First = 0,
    Second = 1,
    Third = 2
}

public class AbilityEffect
{
    public List<AbilityInputSource> playerInputSources;

    public EffectInfo info;
}

public class Ability : MonoBehaviour
{
    public GameObject caster { get { return gameObject; } private set { } }

    public AbilityInfo info { get; private set; }

    private List<System.Object> _inputs = new List<System.Object>();

    public static void Create(GameObject unit, AbilityInfo info)
    {
        Ability ability = unit.AddComponent<Ability>(); // attach to the casting unit
        ability.info = info;
    }

    public void execute()
    {
        if (info.inputInfos.Count > 0)
        {
            _inputs.Clear();
            Events.OnAbilityInput += onPlayerInput;
            info.inputInfos[0].promptInput(this);
        }
    }

    public void onPlayerInput(object input)
    {
        _inputs.Add(input);

        if (_inputs.Count == info.inputInfos.Count)
        {
            Events.OnAbilityInput -= onPlayerInput;
            executeEffects();
        }
        else if (_inputs.Count > info.inputInfos.Count)
        {
            Debug.LogError($"Too many inputs for {info.abilityName}: expected {info.inputInfos.Count}, got {_inputs.Count}");
        }
        else
        {
            info.inputInfos[_inputs.Count].promptInput(this);
        }
    }

    private void executeEffects()
    {
        foreach (AbilityEffect abilityEffect in info.abilityEffects)
        {
            List<System.Object> effectInputs = new List<object>();
            foreach (AbilityInputSource inputSource in abilityEffect.playerInputSources)
            {
                if (inputSource == AbilityInputSource.Caster)
                {
                    effectInputs.Add(caster); // static input: the caster itself
                }
                else
                {
                    effectInputs.Add(_inputs[(int)inputSource]); // dynamic player input
                }
            }
            abilityEffect.info.execute(effectInputs);
        }
    }
}

public abstract class EffectInfo
{
    public void execute(List<object> inputs)
    {
        // Some shared logic
        _execute(inputs);
    }

    protected abstract void _execute(List<object> inputs);
}

public class MoveEffectInfo : EffectInfo
{
    protected override void _execute(List<object> inputs)
    {
        GameObject unit = (GameObject)inputs[0];
        Vector3Int targetPos = (Vector3Int)inputs[1];
        Movement movement = unit.GetComponent<Movement>();
        movement.move(targetPos);
    }
}

public class GroundSelectInputInfo : AbilityInputInfo
{
    public override string uiPrefabName { get { return "GroundSelectInputUi"; } set { } }

    [SerializeField]
    int range = 5;

    public override void promptInput(Ability ability)
    {
        GameObject gameObject = Utility.InstantiatePrefab(uiPrefabName, GameObject.FindGameObjectWithTag("Canvas"));
        GroundSelectInputUi ui = gameObject.GetComponent<GroundSelectInputUi>();
        ui.initialize(ability.caster.transform.position, range);
    }
}
``````

There are a few issues with this though:

1. The mapping from input to effect is not type-safe.

Because effects take many different types of inputs (units, positions, etc.), I just use `object` as a catch-all, plug those objects into each effect’s `execute`, and cast to the correct type inside the effect. However, if I set up the mapping wrong in the editor (put 2 instead of 1, say), it only fails at runtime.

2. Distinguishing between dynamic player inputs and static inputs (like “caster” or “caster’s position”) is really janky.

I do it through an enum that represents both static inputs and indices into the dynamic inputs, which seems really janky. For example, right now the indices only go up to Third; I’d have to add Fourth, Fifth, Sixth, and so on as I create abilities with more inputs.

3. Passing the inputs back to the ability is done through a brittle event system.

Because input comes through the Unity UI, I use an asynchronous callback/event system and register a listener on the Ability object. However, the same listener is used for all AbilityInputs, so the Ability could receive input from a different UI than it expected (if a buggy UI sent multiple onPlayerInput events, for example) and the system wouldn’t notice.

I can’t seem to find a way around these problems, but it seems like there must be a way to build an ability system with modular targeting; I just haven’t found any examples online.

Anyone know how these issues can be solved?

## opengl – Calculating Camera View Frustum Corner for Directional Light Shadow Map

I’m trying to calculate the 8 corners of the view frustum so that I can use them to build the orthographic projection and view matrices needed to render shadows from a directional light based on the camera’s position. I’m not sure how to convert the frustum corners from local space into world space. So far I have calculated the frustum corners as follows (correct me if I’m wrong):

``````
float tan = 2.0 * std::tan(m_Camera->FOV * 0.5);
float nearHeight = tan * m_Camera->Near;
float nearWidth = nearHeight * m_Camera->Aspect;
float farHeight = tan * m_Camera->Far;
float farWidth = farHeight * m_Camera->Aspect;

Vec3 nearCenter = m_Camera->Position + m_Camera->Forward * m_Camera->Near;
Vec3 farCenter = m_Camera->Position + m_Camera->Forward * m_Camera->Far;

Vec3 frustumCorners[8] = {
    nearCenter - m_Camera->Up * nearHeight - m_Camera->Right * nearWidth, // Near bottom left
    nearCenter + m_Camera->Up * nearHeight - m_Camera->Right * nearWidth, // Near top left
    nearCenter + m_Camera->Up * nearHeight + m_Camera->Right * nearWidth, // Near top right
    nearCenter - m_Camera->Up * nearHeight + m_Camera->Right * nearWidth, // Near bottom right

    farCenter - m_Camera->Up * farHeight - m_Camera->Right * farWidth, // Far bottom left
    farCenter + m_Camera->Up * farHeight - m_Camera->Right * farWidth, // Far top left
    farCenter + m_Camera->Up * farHeight + m_Camera->Right * farWidth, // Far top right
    farCenter - m_Camera->Up * farHeight + m_Camera->Right * farWidth, // Far bottom right
};
``````

How do I move these corners into world space?
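For what it’s worth, a common alternative is to skip the trigonometry entirely and unproject the eight corners of the NDC cube through the inverse of the combined view-projection matrix, which yields world-space corners directly. A numpy sketch of that idea; the column-vector convention and OpenGL’s [-1, 1] NDC depth range are assumptions about your setup:

```python
import numpy as np

def frustum_corners_world(view, proj):
    """Unproject the 8 corners of the NDC cube through the inverse of the
    combined view-projection matrix, yielding world-space frustum corners.
    Assumes column-vector math and OpenGL's [-1, 1] NDC depth range."""
    inv = np.linalg.inv(proj @ view)
    corners = []
    for x in (-1.0, 1.0):
        for y in (-1.0, 1.0):
            for z in (-1.0, 1.0):
                p = inv @ np.array([x, y, z, 1.0])
                corners.append(p[:3] / p[3])  # perspective divide
    return corners
```

The same approach also handles off-axis or asymmetric projections for free, since everything is derived from the matrices themselves.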

## Map remote URL to local folder

I have an outdated software installer. It tries to download a `zip` library from the web, but the link is dead and the installation freezes.
I found this library on another site and downloaded it. But I need to give this local file to the installer instead of the URL.

Is there some way to do this?
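If the installer resolves the download host via DNS (an assumption; it might pin an IP address instead), one common workaround is a hosts-file entry pointing that hostname at 127.0.0.1, plus a local web server that answers the installer’s request with your downloaded file. A rough Python sketch; `downloads.example.com` is a placeholder hostname:

```python
# Combined with a hosts-file entry like
#   127.0.0.1   downloads.example.com
# this serves the current directory, so the installer's request for
# http://downloads.example.com/lib.zip is answered locally.
# NOTE: plain http:// URLs hit port 80, which needs admin rights to bind;
# port 8080 is used here purely for illustration.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("127.0.0.1", 8080), SimpleHTTPRequestHandler)
# server.serve_forever()  # uncomment to start serving (blocks)
```

Place the zip at the same relative path the original URL used; the installer’s log or a network sniffer can show the exact path it requests.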

## gr.group theory – Typical preimage of the commutator map

By Goto’s theorem, for any compact connected semisimple Lie group $G$ of dimension $n$, any element $x \in G$ is a commutator, namely $x = (y, z)$ for some $y, z \in G$. Another way to say it is that the commutator map $\pi : G \times G \rightarrow G$ is surjective. By Sard’s lemma it follows that a typical element $w \in G$ is a regular value of $\pi$, and $\pi^{-1}(w) \subset G \times G$ is a smooth compact submanifold of dimension $n$.

Question: what is the homeomorphism type of this manifold for typical $w$?

Of course it is tempting to suspect that $\pi^{-1}(w)$ is homeomorphic to $G$, but I have difficulty checking this even in the rather simple case $G = SO(3)$.

## list manipulation – Can you map an AssociationThread of multiple sublists?

I have two lists that look like this:

``````d={{1,1,3,5,7,2},{1,1,3,5,6,7,2}}

dd={{A1,A1,A3,A5,A7,A2},{A1,A1,A3,A5,A6,A7,A2}}
``````

Using `AssociationThread[d[[1]], dd[[1]]]` associates the first sublists of each list correctly, but is it possible to `Map` the `AssociationThread` over all the sublists? I’ve tried various ways of adding `Map`, but I only ever seem to get output that resembles this:

``````Map[ <|{1,1,3,5,7,2}->{A1,A2,A3,A5,A7,A2}|>]
``````
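For comparison, here is the intended result expressed in Python (not Mathematica syntax; just to pin down the semantics): one association per pair of sublists, built by zipping keys with values, with a later duplicate of a key overwriting the earlier one:

```python
d = [[1, 1, 3, 5, 7, 2], [1, 1, 3, 5, 6, 7, 2]]
dd = [["A1", "A1", "A3", "A5", "A7", "A2"],
      ["A1", "A1", "A3", "A5", "A6", "A7", "A2"]]

# One association (dict) per pair of corresponding sublists.
assocs = [dict(zip(keys, values)) for keys, values in zip(d, dd)]
```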

## flutter – How to save map or array data in Firestore

I tried the following:

``````
createProduto(Produto produto) {
    Map<String, dynamic> model = {
        "nome": produto.nome,
        "descricao": produto.descricao,
        "precoCompra": produto.precoCompra,
        "itens": produto.itens // <================ List<Item>
    };

    // ... save `model` to Firestore ...
}
``````

The Item class:

``````class Item {
String nome;

}
``````

Error:

[ERROR:flutter/lib/ui/ui_dart_state.cc(157)] Unhandled Exception:
Invalid argument: Instance of 'Item'

## 3d – What’s wrong with my normal map implementation?

I’m trying to write my own 3D engine from scratch in C. Right now it can render spheres, but I haven’t been able to correctly implement normal maps.

All of the resources I’ve found online about normal maps cover applying them to meshes, so I just kind of guessed the right way to do it for a sphere. The basic algorithm I’m using to perturb the normal vector is:

``````
struct _Vec3 {
    double x;
    double y;
    double z;
};

struct _Rgb {
    unsigned char r;
    unsigned char g;
    unsigned char b;
};

void adjustNormal(Sphere* sphere, Vec3* point, Vec3* normal) {
    double* arr = getSphereCoordinates(sphere, point); // get coordinates of point, where 0 < x, y < 1
    double x = arr[0];
    double y = arr[1];
    Rgb* color = getPixel(sphere->normalMap, x, y); // get the data from normal map
    x = (color->r) / 255.0;
    y = (color->g) / 255.0;
    double z = (color->b) / 255.0;
    Vec3** ortho = getOrthogonalVectors(normal); // basis of tangent space
    Vec3* tangent = ortho[0];
    Vec3* bitangent = ortho[1];
    scaleVec3(normal, z);    // scale normal vector by z
    scaleVec3(tangent, y);   // scale tangent vector by y
    scaleVec3(bitangent, x); // scale bitangent vector by x
    normalize(normal);
}
``````
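For reference, the conventional tangent-space decode differs from the code above in two ways: each channel is remapped from [0, 1] to [-1, 1], and the scaled basis vectors are summed into a single perturbed normal rather than kept separate. A small Python sketch of that standard formula (plain tuples with an assumed orthonormal basis, not your engine’s API):

```python
import math

def perturb_normal(color, normal, tangent, bitangent):
    """Standard tangent-space normal-map decode: remap each RGB channel
    from [0, 1] to [-1, 1], combine n = x*T + y*B + z*N, renormalize.
    Vectors are plain (x, y, z) tuples; the basis is assumed orthonormal."""
    x = color[0] / 255.0 * 2.0 - 1.0
    y = color[1] / 255.0 * 2.0 - 1.0
    z = color[2] / 255.0 * 2.0 - 1.0
    n = tuple(x * t + y * b + z * nv
              for t, b, nv in zip(tangent, bitangent, normal))
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

With this convention, the flat color (128, 128, 255) decodes to (approximately) the unperturbed surface normal.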

Now, I feel pretty confident that `getSphereCoordinates` and `getPixel` work, because applying a regular texture (not a normal map) to a sphere works fine. I’m less confident about `getOrthogonalVectors`, because even when I change it, I still have the same issue with the normal map. Just in case, here it is:

``````
Vec3** getOrthogonalVectors(Vec3* vec) {
    Vec3 temp;
    copyVec3(&temp, vec);
    temp.x += 10; // no matter how I change
    temp.y += 1;  // this to get a nonparallel vector,
    temp.z += 1;  // the normal map isn't correct

    Vec3* v1 = cross(vec, &temp);
    Vec3* v2 = cross(vec, v1);
    scaleVec3(v2, -1); // necessary for orientation

    normalize(v1);
    normalize(v2);

    Vec3** ans = malloc(sizeof(Vec3*) * 2);
    ans[1] = v1;
    ans[0] = v2;
    return ans;
}
``````

Below is the normal map being used and what the sphere looks like after the map is applied.

This picture is with the light source being at the same location as the camera, so in theory the entire front of the sphere should be lit. Is there something I’m doing wrong?

## javascript – D3 Choropleth Map Stops After Rendering One Region

I’m trying to create a choropleth map of Morocco in D3, colored by population density. Currently it renders the first region in the GeoJSON, but that’s the only thing it renders. This is the part where I load and render:

``````
d3.csv("moroccoCSV.csv", function(data) {

    // Set input domain for color scale
    color.domain([
        d3.min(data, function(d) { return d.popDensity; }),
        d3.max(data, function(d) { return d.popDensity; })
    ]);

    var url = "Morocco.json";
    d3.json(url, function(geojson) {

        for (var i = 0; i < data.length; i++) {

            // Grab region name
            var dataState = data[i].region;

            // Grab data value, and convert from string to float
            var dataValue = parseFloat(data[i].popDensity);

            // Find the corresponding region inside the GeoJSON
            for (var j = 0; j < geojson.features.length; j++) {

                var jsonState = geojson.features[j].properties.NAME_1;

                if (dataState == jsonState) {

                    // Copy the data value into the JSON
                    geojson.features[j].properties.value = dataValue;

                    // Stop looking through the JSON
                    break;
                }
            }
        }

        svg.selectAll("path")
            .data(geojson.features)
            .enter()
            .append("path")
            .attr("d", path)
            .style("stroke", "black")
            .style("fill", function(d) {
                // Get data value
                var value = d.properties.value;

                if (value) {
                    // If value exists…
                    return color(value);
                } else {
                    // If value is undefined…
                    return "#ccc";
                }
            });
    });
});
``````

The console logs each region’s value, and its path as well, but they do not render. Also, if I replace `.attr("d", path)` with `.attr("d", path(geojson))`, the map draws all the regions, but they are all the same fill color.
Any ideas?

## The problem

I have two sets of numbers and need to find a mapping between the two sets such that the total distance between mapped numbers is as small as possible. Two numbers must not be mapped if they are farther apart than `0.18`. As many numbers as possible should get mapped.

Also, the sets are not necessarily the same size. So, consequently, some numbers of the larger set won’t get any mapping.

Example:

Is there a reasonably efficient algorithm that finds a mapping like this? Or, is there a term for this specific problem so that I can research algorithms on my own?

## My research

Through googling I encountered this question, which led me to the term “Euclidean Bipartite Matching Problem” which seems to be the term for a problem very similar to mine. However, my problem is slightly different than the Euclidean Bipartite Matching Problem.

So basically, I’m looking for an efficient algorithm for the 1-dimensional Euclidean Bipartite Matching Problem except that the two sets of numbers can be of differing size, and the distance between two numbers must not exceed `0.18`.
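One workable approach, assuming it’s acceptable to sort both sets first: in one dimension an optimal matching can always be taken non-crossing (uncrossing two matched pairs never increases the larger distance or the total), so a dynamic program over the two sorted lists can find a maximum-cardinality, then minimum-total-distance matching. A Python sketch of that idea:

```python
def match_sets(reds, blues, max_dist=0.18):
    """Maximum-cardinality, minimum-total-distance matching between two
    lists of numbers, where matched pairs may be at most max_dist apart.
    O(m*n) DP over the sorted lists; relies on the fact that an optimal
    1-D matching never needs to cross pairs."""
    r, b = sorted(reds), sorted(blues)
    m, n = len(r), len(b)
    # dp[i][j] = (matched_count, -total_distance) for r[:i] vs b[:j];
    # tuples compare lexicographically, so max() prefers more matches,
    # then less total distance.
    dp = [[(0, 0.0)] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            best = max(dp[i - 1][j], dp[i][j - 1])    # skip r[i-1] or b[j-1]
            d = abs(r[i - 1] - b[j - 1])
            if d <= max_dist:
                cnt, neg = dp[i - 1][j - 1]
                best = max(best, (cnt + 1, neg - d))  # match the pair
            dp[i][j] = best
    # Backtrack to recover the matched pairs.
    pairs, i, j = [], m, n
    while i > 0 and j > 0:
        if dp[i][j] == dp[i - 1][j]:
            i -= 1
        elif dp[i][j] == dp[i][j - 1]:
            j -= 1
        else:
            pairs.append((r[i - 1], b[j - 1]))
            i, j = i - 1, j - 1
    return pairs[::-1]
```

The O(m·n) cost is fine for moderate set sizes; for very large inputs, the non-crossing structure also admits faster specialized algorithms.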

## My attempt

I’ve already coded my own implementation, but it doesn’t work properly and is complicated enough that I’m not even sure why it doesn’t work.

As for the basic idea behind my implementation: let’s call the first set the red numbers and the second set the blue numbers (apparently that’s the terminology used for the Euclidean bipartite matching problem). Now:

1. Go through all red numbers, and for each:
   1. Find the closest blue number within a ±0.18 range.
   2. If that blue number is already assigned to a different red number: if the existing assigned red number is nearer than our red number, skip this blue number.
   3. Assign our red number to the blue number.
   4. If we overwrote a previously assigned red number in the process, have that red number find itself a new blue number (i.e. send it through steps 1–4 again).

I’m doubtful that this implementation is even correct, but this is what I’ve tried so far.

Are there well-known algorithms to do this task, so that I don’t have to create a wonky, non-functioning, slow implementation myself? Or in general, is there a term for this specific problem? Then I could google for that term and find what I need.