testing – How do I simulate test data in a database?

The best way to get test data in is the same way it gets in in production, because that is the most realistic.

So the question here is: how exactly does this third-party software get its data in? Inserts, a sproc, an API, SSIS? Use the same method to insert your test data.

If you don’t know or can’t tell, then you could run the tool and check the database log, or monitor the network traffic. Perhaps you can even rerun the transactions.
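
If you discover, for example, that the tool does plain INSERTs, a throwaway script can replay equivalent statements against a test database. A minimal sketch using Python’s built-in sqlite3 module (the table, columns, and rows are made up for illustration; swap in your real driver and schema):

import sqlite3

# Hypothetical schema -- replace with whatever the third-party tool actually writes to.
conn = sqlite3.connect("test.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")

# Insert the test rows the same way the tool would: plain INSERT statements.
test_rows = [(1, 9.99), (2, 24.50), (3, 5.00)]
conn.executemany("INSERT INTO orders (id, amount) VALUES (?, ?)", test_rows)
conn.commit()
conn.close()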

virtual machine – How do I simulate plugging & unplugging the mouse in a VirtualBox VM on Ubuntu?

To test Xi (the X Input extension), I’d like to have a way to connect/disconnect my input devices in my VirtualBox system. That should generate an X event letting me know that the list of input devices has changed.

Looking at the VirtualBox settings, I do not see a way to do that there.

Looking at the menus, I thought it might be there, in the Input menu, but that menu doesn’t show up at all (not at the very top of the screen, not in the VM window, not in the status bar).

Note: The host is Ubuntu 18.04 and I use the snap version of VirtualBox (the newest 6.1.18).
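
For reference, once a device does get hot-plugged, the change can at least be observed from inside the guest. Below is a crude Python polling sketch, assuming the standard xinput CLI tool is installed in the guest; a proper XInput client watching hierarchy-changed events would be the cleaner way to test Xi itself:

import subprocess, time

# Crude sketch: poll `xinput list` once a second and report when the
# guest's input device list changes.
def device_list():
    out = subprocess.run(["xinput", "list", "--name-only"],
                         capture_output=True, text=True)
    return out.stdout

previous = device_list()
while True:
    time.sleep(1)
    current = device_list()
    if current != previous:
        print("Input device list changed:")
        print(current)
        previous = current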

networking – Attempting to drop packets to simulate a network failure

I have a switch running Linux with multiple devices on it.
I’m attempting to simulate a network outage on one of the devices.

I’ve attempted to drop the packets with both netem and iptables, using one of the following commands:

tc qdisc add dev <interface> root netem loss 100%

or

iptables -A INPUT -i <interface> -p all -j DROP

However, the device stays connected.

Any ideas?

How can I simulate a Bayer filter (or just RGB channels) using Photoshop layers?

How can I essentially combine pure red, green, and blue info in 3 layers to create full color?

You have to start with pure ‘Red’, pure ‘Green’, and pure ‘Blue’ color information. But that’s not what you can get from a Bayer masked sensor, since the actual colors of each set of filters are not ‘Red’, ‘Green’, and ‘Blue’.

It’s not what we get from the cones in our retinas, either.

Keep in mind that there’s no specific color intrinsic in any wavelength of visible light, or in other wavelengths of electromagnetic radiation for that matter. The color we see in a light source at a specific wavelength is a product of our perception of it, not of the light source itself. A different species may well not perceive wavelengths included in the human-defined visible spectrum, just as many species of insects can perceive light at near-infrared wavelengths that do not produce a chemical response in human retinas.

Color is a construct of how our eye-brain system perceives electromagnetic radiation at certain wavelengths.

Our Bayer masks mimic our retinal cones far more than they mimic our RGB output devices.

The actual colors to which each type of retinal cone is most sensitive:

[image: spectral sensitivity curves of the three types of retinal cones]

Compare that to the typical sensitivity measurements of digital camera sensors (I’ve added vertical lines at the wavelengths where our RGB – and sometimes RYGB – color reproduction systems output most strongly):

[image: typical digital camera sensor sensitivity curves, with the RGB output wavelengths marked]

The Myth of “only” red, “only” green, and “only” blue

If we could create a sensor so that the “blue” filtered pixels were sensitive to only 420nm light, the “green” filtered pixels were sensitive to only 535nm light, and the “red” filtered pixels were sensitive to only 565nm light, it would not produce an image that our eyes would recognize as anything resembling the world as we perceive it. To begin with, almost all of the energy of “white light” would be blocked from ever reaching the sensor, so it would be far less sensitive to light than our current cameras are. Any source of light that didn’t emit or reflect light at one of the exact wavelengths listed above would not be measurable at all, so the vast majority of a scene would be very dark or black. It would also be impossible to differentiate objects that reflect a lot of light at, say, 490nm and none at 615nm from objects that reflect a lot of 615nm light but none at 490nm, if they both reflected the same amounts of light at 535nm and 565nm. It would be impossible to tell apart many of the distinct colors we perceive.

Even if we created a sensor so that the “blue” filtered pixels were only sensitive to light below about 480nm, the “green” filtered pixels only to light between 480nm and 550nm, and the “red” filtered pixels only to light above 550nm, we would not be able to capture and reproduce an image that resembles what we see with our eyes. Although it would be more efficient than the sensor described above (sensitive to only 420nm, only 535nm, and only 565nm light), it would still be much less sensitive than the overlapping sensitivities provided by a Bayer masked sensor. The overlapping sensitivities of the cones in the human retina are what give the brain the ability to perceive color from the differences in the responses of each type of cone to the same light. Without such overlapping sensitivities in a camera’s sensor, we wouldn’t be able to mimic the brain’s response to the signals from our retinas. We would not be able to, for instance, discriminate at all between something reflecting 490nm light and something reflecting 540nm light. In much the same way that a monochromatic camera cannot distinguish between wavelengths of light, only between intensities of light, we would not be able to discriminate the colors of anything that emits or reflects only wavelengths falling within a single one of the three color channels.

Think of how it is when we see under very limited-spectrum red lighting. It is impossible to tell the difference between a red shirt and a white one; they both appear the same color to our eyes. Similarly, under limited-spectrum red light, anything that is blue in color will look very much like it is black, because it isn’t reflecting any of the red light shining on it and there is no blue light shining on it to be reflected.

The whole idea that red, green, and blue would be measured discretely by a “perfect” color sensor is based on oft-repeated misconceptions about how Bayer masked cameras reproduce color (the green filter only allows green light to pass, the red filter only allows red light to pass, etc.). It is also based on a misconception of what ‘color’ is.

How Bayer Masked Cameras Reproduce Color

Raw files don’t really store any colors per pixel. They only store a single brightness value per pixel.

It is true that with a Bayer mask the light reaching each pixel well is filtered with either a “Red”, “Green”, or “Blue” filter. But there’s no hard cutoff where only green light gets through to a green-filtered pixel or only red light gets through to a red-filtered pixel. There’s a lot of overlap. A lot of red light and some blue light gets through the green filter, a lot of green light and even a bit of blue light makes it through the red filter, and some red and green light is recorded by the pixels that are filtered with blue. Since a raw file is a set of single luminance values, one for each pixel on the sensor, there is no actual color information in a raw file. Color is derived by comparing adjoining pixels that are each filtered for one of the three colors of the Bayer mask.

Each photon vibrating at a frequency corresponding to a ‘red’ wavelength that makes it past the green filter is counted just the same as each photon vibrating at a frequency corresponding to a ‘green’ wavelength that makes it into the same pixel well.

It is just like putting a red filter in front of the lens when shooting black and white film. It doesn’t result in a monochromatic red photo, nor in a B&W photo where only red objects have any brightness at all. Rather, when photographed in B&W through a red filter, red objects appear as a brighter shade of grey than green or blue objects of the same brightness in the scene.

The Bayer mask in front of monochromatic pixels doesn’t create color either. What it does is change the tonal value (how bright or how dark the luminance of a particular wavelength of light is recorded) of various wavelengths by differing amounts. When the tonal values (gray intensities) of adjoining pixels filtered with the three different colors of the Bayer mask are compared, colors may be interpolated from that information. This is the process we refer to as demosaicing.
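
Photoshop aside, the mosaic-then-demosaic round trip is easy to sketch in code. Here is a toy Python/numpy version with an idealized RGGB pattern and naive neighbor-averaging interpolation. Note that it deliberately treats the filters as perfectly separating R, G, and B, which, as the rest of this answer explains, real filter stacks do not; real raw converters are also far more sophisticated than this:

import numpy as np
from scipy.signal import convolve2d

def mosaic_rggb(rgb):
    """Keep one channel per pixel, following an idealized RGGB Bayer pattern."""
    h, w, _ = rgb.shape
    mono = np.zeros((h, w))
    mono[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites
    mono[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites
    mono[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites
    mono[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites
    return mono

def demosaic_naive(mono):
    """Estimate all three channels everywhere by averaging nearby sampled sites."""
    h, w = mono.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # where R was sampled
    masks[0::2, 1::2, 1] = True   # where G was sampled
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True   # where B was sampled
    kernel = np.ones((3, 3))
    out = np.zeros((h, w, 3))
    for c in range(3):
        num = convolve2d(mono * masks[..., c], kernel, mode="same")
        den = convolve2d(masks[..., c].astype(float), kernel, mode="same")
        out[..., c] = num / np.maximum(den, 1e-9)  # average of sampled neighbors
    return out

# Round trip: full RGB -> Bayer mosaic -> interpolated RGB approximation.
rgb = np.random.rand(8, 8, 3)
approx = demosaic_naive(mosaic_rggb(rgb))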

What Is ‘Color’?

Equating certain wavelengths of light with the “color” humans perceive for that specific wavelength is a bit of a false assumption. “Color” is very much a construct of the eye/brain system that perceives it, and doesn’t really exist at all in the portion of the range of electromagnetic radiation that we call “visible light.” While light of a single discrete wavelength may be perceived by us as a certain color, it is equally true that some of the colors we perceive (magenta, for instance) cannot be produced by light of any single wavelength.

The only difference between “visible” light and other forms of EMR that our eyes don’t see is that our eyes are chemically responsive to certain wavelengths of EMR while not being chemically responsive to other wavelengths. Bayer masked cameras work because their sensors mimic the trichromatic way our retinas respond to visible wavelengths of light and when they process the raw data from the sensor into a viewable image they also mimic the way our brains process the information gained from our retinas. But our color reproduction systems rarely, if ever, use three primary colors that match the three respective wavelengths of light to which the three types of cones in the human retina are most responsive.

movement – How can I simulate a super jump in Rotted Capes?

Reading through the Rotted Capes rule book, I found a few movement types, such as tunneling, teleportation, and flight. I also found jumping mentioned under Surge. Aside from that side note, I did not find anything about jumping at all.

How can I build a super that can leap quite far (like a frog-powers-based super)? One idea would be using Flight with a limitation that it can only be used for single moves, and only when standing on solid ground when the move starts.

How can I best simulate a super jump power with the rules?

rendering – How do you simulate UV light and materials with UV-reflective/fluorescent properties within a PBR renderer?

Disclaimer: I don’t have experience with PBR; this answer is just filtering my physics knowledge through computer graphics concepts.

Fluorescence can be understood in general as: a material absorbs light of some wavelength (/frequency/energy), and then emits light of another (longer, lower energy) wavelength. This is not exclusive to UV (e.g. a material may be excited by blue light and emit green light).

Phosphorescence (a.k.a. “glow in the dark”) is functionally the same phenomenon except that the re-emission is slow, on a time scale we can perceive. If you want to model phosphorescence then it would have to be as an independent light source rather than a kind of reflection, since it’s stateful.

what do I need to extend with what kinds of properties

You need to simulate UV light propagation — light sources and ordinary reflections. An additional color channel may be suitable. Note that common window glass passes some but not all UV; if you care about this and similar effects you might need multiple UV channels.

Each fluorescent material needs a color of visible light it emits, and the efficiency of the conversion (which, in the model, can just be the brightness of that color). It may also need the spectrum it absorbs (if you care about modeling the different absorption of different UV wavelengths or even green light) and a decay rate (if you are modeling phosphorescence).

and what are the equations to wire them together

UV light propagation follows the same rules, with different material-specific parameters, as visible light propagation.

Fluorescence could be calculated in the same way as a diffuse reflection (and transmission for thin objects such as a sheet of plastic), except that instead of multiplying the incoming light componentwise by the reflectance or transmittance, you take the dot product of the incoming light and the absorption spectrum (or select only the UV component of the light if you’re not modeling the absorption explicitly), obtaining a scalar, and then multiply the emission color by that scalar.
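
As a sketch of that rule in Python (the names and the [R, G, B, UV] channel layout are my own assumptions, not standard PBR parameters):

import numpy as np

def shade_fluorescent(incoming, diffuse_rgb, absorption, emission_rgb):
    """Diffuse shading plus fluorescence.

    incoming      -- incident radiance per channel, [R, G, B, UV]
    diffuse_rgb   -- ordinary diffuse reflectance, visible channels only
    absorption    -- absorption spectrum of the fluorophore, per channel
    emission_rgb  -- visible emission color, pre-scaled by conversion efficiency
    """
    absorbed = float(np.dot(incoming, absorption))  # the scalar described above
    visible = incoming[:3] * diffuse_rgb            # ordinary componentwise term
    return visible + absorbed * emission_rgb        # add the re-emitted light

# Example: a dim grey surface under mostly-UV light, fluorescing green.
light = np.array([0.2, 0.2, 0.2, 1.0])
print(shade_fluorescent(light,
                        diffuse_rgb=np.array([0.1, 0.1, 0.1]),
                        absorption=np.array([0.0, 0.0, 0.1, 0.9]),
                        emission_rgb=np.array([0.1, 0.8, 0.2])))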

If you want to include phosphorescence (which is only relevant in response to time-varying UV lighting conditions), then the abovementioned scalar needs to be a persistent state of the object (or individual texels of its surface) — think of it as the energy storage powering a light source. The UV absorption adds energy and the visible emission consumes it (with some inefficiency along the way). The emission rate may or may not be proportional to the energy (i.e. typical exponential decay); I found references to more complex behavior but probably exponential decay is a good enough model for visual effects.
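
A sketch of that stateful bookkeeping, assuming simple exponential decay (all names are hypothetical):

def step_phosphor(energy, uv_absorbed, dt, decay_rate=1.0, efficiency=0.5):
    """Advance the stored energy of one phosphorescent texel by one time step.

    energy      -- persistent per-texel state (the 'charge')
    uv_absorbed -- scalar absorption this step, computed as for fluorescence
    decay_rate  -- fraction of stored energy released per unit time
    efficiency  -- how much of the released energy becomes visible light
    """
    released = energy * decay_rate * dt       # exponential decay
    energy = energy + uv_absorbed * dt - released
    return energy, released * efficiency      # feed the second value to an emission term

# Charge under UV for a second, then watch the glow fade in the dark.
e = 0.0
for _ in range(10):
    e, glow = step_phosphor(e, uv_absorbed=1.0, dt=0.1)
for _ in range(10):
    e, glow = step_phosphor(e, uv_absorbed=0.0, dt=0.1)
    print(round(glow, 4))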

mathematical optimization – Is there a better way to simulate quadratic cost MPC problems?

I would like to know if there is a more straightforward or better way, in terms of code length and accuracy, to simulate MPC (or, essentially, optimal control) problems than what I am doing.

The receding horizon optimal control problem I want to simulate is as follows:
$$
\begin{aligned}
\min_{\{u_0,\dots,u_{N-1}\}} \; & \sum_{p=0}^{N-1} x_p^T P x_p + x_N^T Q x_N \\
\text{subject to} \; & x_{k+1} = A x_k + B u_k, \quad k = 0,1,\dots,N-1 \\
& A_x x_k \leq b_x, \quad x_k \in \mathbb{R}^n, \quad k = 0,1,\dots,N-1 \\
& A_u u_k \leq b_u, \quad u_k \in \mathbb{R}^m, \quad k = 0,1,\dots,N-1 \\
& A_N x_N \leq b_N, \quad x_N \in \mathbb{R}^n \\
& x_0 = x(0)
\end{aligned}
$$

What I have done is translate the above problem into a form I can give to Mathematica’s QuadraticOptimization solver. Following Sec. 11.3.1 of Predictive Control for Linear and Hybrid Systems (the book is available for free on the author’s website, so this is not an illegitimate copy), the problem can be re-cast as below:
$$
\begin{aligned}
\min_{V_0} \; & V_0^T M V_0 \\
\text{subject to} \; & J_0 V_0 \leq w_0 \\
& \begin{bmatrix} 0 & \dots & 0 & 1 & \dots & 1 \end{bmatrix} V_0 = x(0)
\end{aligned}
$$

where $V_0 = \{u_0,\dots,u_{N-1},x(0)\}$ and the row matrix in the second constraint has $mN$ zeros and $n$ ones, which basically means that the last $n$ components of $V_0$ make up the vector $x(0)$.

Is there a more straightforward way?
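
For comparison, here is what the same kind of receding-horizon QP looks like when the states are kept as decision variables instead of being condensed into $V_0$. This sketch uses Python with cvxpy rather than Mathematica, and a toy double integrator with box constraints standing in for the general $A_x$, $A_u$, $A_N$ polytopes:

import numpy as np
import cvxpy as cp

# Toy double integrator standing in for the general problem data.
n, m, N = 2, 1, 10
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
P = np.eye(n)
Q = 10 * np.eye(n)
x0 = np.array([5.0, 0.0])

x = cp.Variable((n, N + 1))
u = cp.Variable((m, N))
cost = cp.quad_form(x[:, N], Q)                  # terminal cost
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], P)             # stage cost
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 1.0,      # stand-in for A_u u_k <= b_u
                    cp.abs(x[:, k]) <= 10.0]     # stand-in for A_x x_k <= b_x
cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value[:, 0])  # apply the first input, shift the horizon, re-solve

QuadraticOptimization should accept an equivalent formulation with the $x_k$ kept as variables and the dynamics as equality constraints; that avoids building $M$, $J_0$, and $w_0$ by hand, at the price of a larger (but sparse) problem.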

A bit of a background

This works, but I don’t know if it gives correct results. I tried comparing my MPC results with a few examples from research papers and lecture notes, and the outcome was a mixed bag: for some, the results matched perfectly; for some, the results matched for certain time periods only; and for some, the solver threw an error that no point satisfies the constraints. That is of course an issue for a separate question on MathematicaSE, but it is what prompted me to ask this question.

Thank you for your help!

phys repping – As a non-smoker, how can I simulate smoking this pipe at a LARP?

Get a Pipe-style vape

There are vapes designed to look like pipes, which avoids all your issues with smoking tobacco. You can use 0% nicotine juice, to avoid the bad health effects of nicotine, and you don’t have to actually inhale. For some reason, Amazon is allergic to listing them, though regular vapes and supplies are all over the place.

Sources:
A bottle of vape juice that has 0% nicotine sitting on my desk.

Here is an amazon search for 0% (nicotine) Vape Juice

Here are some links to potential vape-pipes:

vaping360.com

ebay

I am not affiliated with any of the links in my answer; I just found them on Google.

Health and Safety Note: Nothing foreign you inhale is entirely healthy; even a prescribed inhaler does some damage to the lungs. E-juice and vaping, however, have concretely been shown to be safer than smoking, even though they have their own problems. Secondhand smoke is also a safety concern, but if that were prohibitive, then the OP would not be attending at all.

My own experience, as someone allergic to actual cigarette smoke (I lose feeling in my lips, my eyes burn, and I eventually cough up blood), is that vapor from e-cigs and vapes is vastly better and doesn’t noticeably bother me. I even use my own vape with no issue.

For a less anecdotal answer, here is an excerpt from a WebMD article on vaping. The article does cite health concerns, but like most official sites, they’re allergic to doing a side-by-side comparison of vapes vs. cigarettes, and they mostly focus on the cheap, convenience-store varieties, since those are less healthy and make for a scarier story.

Here is the quote: (emphasis mine):

WebMD – Is Vaping Bad For You?

E-cigarettes aren’t thought of as 100% safe, but most experts think *they’re less dangerous than cigarettes*, says Neal Benowitz, MD, a nicotine researcher at the University of California at San Francisco. Cigarette smoking kills almost half a million people a year in the United States. Most of the harm comes from the thousands of chemicals that are burned and inhaled in the smoke, he explains.

E-cigs don’t burn, so people aren’t as exposed to those toxins. A 2015
expert review from Public Health England estimated e-cigs are 95% less
harmful than the real thing.

graphs – How to simulate online matching algorithms (implementation)

I was reading about online algorithms and bipartite matching.

I found an implementation on several websites (like GeeksforGeeks) that works fine.
For the online version, I found this paper
https://people.eecs.berkeley.edu/~vazirani/pubs/online.pdf

But there’s one part that I don’t understand.

In this paper, they refer to U (boys) as “on-line”, which means the data arrive sequentially (in real time?), not all at once.

While I can see how such a case is frequent in real life, I fail to understand what kind of implementation could be used to demonstrate this…

I thought of the following:

  • Create a graph with the girls
  • Add boys one at a time, manually defining the girls they like via some sort of user input (input in Python / cin in C++)
  • Solve (match) for the newly added boy according to rank (priority), then return to user input until no girls are left or the user decides to stop (see the sketch below)
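
Something like that could look like the following toy Python sketch. It uses the RANKING rule from the KVV paper (fix a random priority over the girls once, up front, then match each arriving boy to the best-priority free girl he likes):

import random

def ranking_online_matching(num_girls, arrivals):
    """Online bipartite matching via KVV RANKING.

    num_girls -- |V|, fixed and known up front
    arrivals  -- iterable over boys; each element lists the girls he likes
    """
    rank = list(range(num_girls))
    random.shuffle(rank)            # rank[g]: priority of girl g, fixed once
    taken = set()
    matching = []
    for boy, liked in enumerate(arrivals):
        free = [g for g in liked if g not in taken]
        if free:
            g = min(free, key=lambda girl: rank[girl])  # best-ranked free girl
            taken.add(g)
            matching.append((boy, g))   # irrevocable: never reconsidered
    return matching

# Boys arrive one at a time; their number and lists are not known in advance.
print(ranking_online_matching(3, [[0, 1], [0], [1, 2]]))

Arrivals are hard-coded here; replacing the list with a loop over input() gives the interactive driver you describe.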

In the online version, do we assume a predefined, limited number for both U and V (boys and girls), or just for the girls?

Is my suggestion correct, or did I misunderstand something?

Thanks in advance