virtual machines – Azure VM Scale Set: When exactly is a state change considered ‘complete’?

I’m asking because this is relevant for the autorepair grace period.
https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs#grace-period

When an instance goes through a state change operation because of a PUT, PATCH or POST action performed on the scale set (for example reimage, redeploy, update, etc.), then any repair action on that instance is performed only after waiting for the grace period. Grace period is the amount of time to allow the instance to return to healthy state. The grace period starts after the state change has completed.

We use a stock image and then use the Custom Script Extension to configure the machine. These scripts take a long time to run – think ~30 minutes.
I’ve seen that when the custom scripts throw an error, the VM creation is marked as a failure.

What’s not clear to me is whether the run time of these custom scripts is included in the ‘state change’ or not.

Has anyone tested this, or is it documented somewhere?

algebra precalculus – Confused about how we scale graph axes to make them dimensionless.

I am trying to understand the solution to part $\mathrm{(iii)}$. But, for the question I’m asking to make sense, I need to include the solutions to parts $\mathrm{(i)}$ and $\mathrm{(ii)}$ also:

Consider a triangular lattice where the sides of the triangles have length $d$. The figure gives a choice of unit cells (dashed lines).

[Figure: triangular lattice]

$\mathrm{(i)}$ Use the sides of the unit cells as the primitive lattice vectors, $\boldsymbol{a}_1$ and $\boldsymbol{a}_2$. Write down these vectors in Cartesian coordinates.

$\mathrm{(ii)}$ Write down a pair of reciprocal space vectors $\boldsymbol{b}_{1,2}$ satisfying the condition that
$\boldsymbol{a}_i \cdot \boldsymbol{b}_j = 2\pi\delta_{ij}$.
(If you want to use the explicit formula in three dimensions given in the lectures,
then you should pick as $\boldsymbol{a}_3$ the unit vector in the direction out of the page.)

$\mathrm{(iii)}$ The reciprocal lattice vectors $\boldsymbol{G}$ are defined by $\boldsymbol{G} = h_1\boldsymbol{b}_1 + h_2\boldsymbol{b}_2$, where $h_{1,2}$ are integers and $\boldsymbol{b}_1$ and $\boldsymbol{b}_2$ are the vectors from part $\mathrm{(ii)}$. Sketch the lattice that is formed by the reciprocal lattice vectors $\boldsymbol{G}$ of the triangular lattice.


Solutions:

$\mathrm{(i)}$ The primitive lattice vectors are $\boldsymbol{a}_1 = (d, 0)$ and $\boldsymbol{a}_2 = \left(\dfrac{d}{2},\dfrac{\sqrt{3}d}{2}\right)$.

$\mathrm{(ii)}$ A choice of the primitive lattice vectors (bold arrows in
the diagram) for the reciprocal lattice is $\boldsymbol{b}_1=\left(\dfrac{2\pi}{d},-\dfrac{2\pi}{\sqrt{3}d}\right)$ and $\boldsymbol{b}_2=\left(0,\dfrac{4\pi}{\sqrt{3}d}\right)$. Other choices are possible, such as $-\boldsymbol{b}_1$ and $-\boldsymbol{b}_2$.
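
As a quick sanity check (my own working, not part of the printed solutions), these vectors do satisfy the condition from part $\mathrm{(ii)}$:

$$\boldsymbol{a}_1\cdot\boldsymbol{b}_1 = d\cdot\dfrac{2\pi}{d} + 0 = 2\pi, \qquad \boldsymbol{a}_1\cdot\boldsymbol{b}_2 = d\cdot 0 + 0\cdot\dfrac{4\pi}{\sqrt{3}d} = 0,$$

$$\boldsymbol{a}_2\cdot\boldsymbol{b}_1 = \dfrac{d}{2}\cdot\dfrac{2\pi}{d} - \dfrac{\sqrt{3}d}{2}\cdot\dfrac{2\pi}{\sqrt{3}d} = \pi - \pi = 0, \qquad \boldsymbol{a}_2\cdot\boldsymbol{b}_2 = \dfrac{\sqrt{3}d}{2}\cdot\dfrac{4\pi}{\sqrt{3}d} = 2\pi,$$

so $\boldsymbol{a}_i\cdot\boldsymbol{b}_j = 2\pi\delta_{ij}$ as required.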

$\mathrm{(iii)}$ $\boldsymbol{G} = h_1\boldsymbol{b}_1 + h_2\boldsymbol{b}_2$ with integers $h_{1,2}$.
The diagram shows all the $\boldsymbol{G}$ vectors plotted as points in $\boldsymbol{k}$-space.
All the $\boldsymbol{G}$ vectors form a periodic array in reciprocal space. This ‘reciprocal lattice’ for a triangular lattice in real space is itself a triangular lattice in $\boldsymbol{k}$-space.
[Figure: reciprocal lattice]

When I asked my lecturer about this scaling of the $x$ and $y$ axes, he just said (something like) that it is to “avoid having factors of $\dfrac{2\pi}{d}$ on each increment of the $x$ and $y$ axes”. This makes sense, since having a dimensionless $x$-axis looks clearer than this:

[Figure: unscaled $x$-axis]

and similarly for the $y$ axis.


So I will first factor out $\dfrac{2\pi}{d}$; then the reciprocal lattice vectors are $\boldsymbol{b}_1=\left(\dfrac{2\pi}{d},-\dfrac{2\pi}{\sqrt{3}d}\right)=\dfrac{2\pi}{d}\left(1,-\dfrac{1}{\sqrt{3}}\right)$ and $\boldsymbol{b}_2=\left(0,\dfrac{4\pi}{\sqrt{3}d}\right)=\dfrac{2\pi}{d}\left(0,\dfrac{2}{\sqrt{3}}\right)$. I thought the graph axes should then look like this:

[Figure: incorrectly scaled $x$-axis]

and similarly for the $y$-axis.


The reason I think the graph axis should read $\dfrac{2\pi k_x}{d}$ and not $\dfrac{k_x d}{2\pi}$ (as in the solution) is simply that I have factored out the $\dfrac{2\pi}{d}$ above, so that what is plotted does not depend on $\dfrac{2\pi}{d}$. Math is not my strong point and I just cannot figure out why the axis reads $\dfrac{k_x d}{2\pi}$ instead of $\dfrac{2\pi k_x}{d}$ (which is what it looks like it should be). Can anyone please explain what is going on here?

Thanks in advance!

Should your heuristic for an A* search algorithm be on the same scale as your actual weights?

I’m a bit confused about the scale of heuristics when implementing A* search. The total cost of travel to a node n is f(n) = g(n) + h(n), where g(n) is the actual cost of travelling to that node from the source node and h(n) is the estimated heuristic cost of travelling from that node to the target. Should h and g be on the same scale?

I’ve read some conflicting takes on it: one example of using A* for plane routes had g values in the range 10-50 but h values in the range 100-900, yet elsewhere I read that they should be on the same scale to avoid turning an A* search into a greedy best-first search.
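
To make the question concrete, here’s a small sketch I put together (Java; the node names and cost numbers are hypothetical, not taken from either source). With g and h in the same units, both terms shape the expansion order; but if h is, say, in metres while g is in kilometres, f(n) is dominated by h(n) and the ordering is effectively that of greedy best-first:

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class AStarScaleDemo {
        // A frontier entry: g = cost from the source so far, h = heuristic estimate to the target.
        record Entry(String name, double g, double h) {
            double f() { return g + h; }   // f(n) = g(n) + h(n)
        }

        public static void main(String[] args) {
            // g and h in the same units (km): both terms influence the ordering.
            PriorityQueue<Entry> frontier = new PriorityQueue<>(Comparator.comparingDouble(Entry::f));
            frontier.add(new Entry("A", 10, 40));   // f = 50
            frontier.add(new Entry("B", 30, 15));   // f = 45, expanded first
            System.out.println(frontier.poll().name());   // B: g mattered

            // h in metres while g stays in km: h dwarfs g, so the ordering
            // follows h alone and g(n) barely contributes.
            PriorityQueue<Entry> skewed = new PriorityQueue<>(Comparator.comparingDouble(Entry::f));
            skewed.add(new Entry("A", 10, 40_000));   // f = 40010
            skewed.add(new Entry("B", 30, 15_000));   // f = 15030
            System.out.println(skewed.poll().name());   // B again, but chosen by h alone
        }
    }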

Updating Azure Virtual Machine Scale Set

I have hosted a website in an Azure virtual machine scale set by following the steps below:

  1. Create a VM and make the necessary changes/installations in IIS.
  2. Create a snapshot of the VM. This ensures that the above instance can
     be used for future changes.
  3. Create a disk from the snapshot.
  4. Create a VM from the disk.
  5. RDP to the instance and generalize it for deployment (sysprep):
     run %WINDIR%\System32\Sysprep\sysprep.exe as admin, select System
     Out-of-Box Experience (OOBE), tick the Generalize check box, and set
     Shutdown Option = Shutdown.
  6. Create an image (capture) from the above instance.
  7. Create a VM scale set (VMSS) from the above image.

Suppose there is a change in the web build. Is there a way to update the scale set without following these steps again (preferably from the portal)?

architecture – java threading model for scale up

Once we have more clients, the thread pool becomes unmanageable. In our case, 20K users streaming data means 20K running threads + 20K Java internal queues. My question is:

What you are calling a thread pool doesn’t sound like a thread pool to me. From Wikipedia’s definition:

By maintaining a pool of threads, the model increases performance and avoids latency in execution due to frequent creation and destruction of threads for short-lived tasks.[2] The number of available threads is tuned to the computing resources available to the program, such as a parallel task queue after completion of execution.

The whole idea of a thread pool is to avoid having 20k threads. Instead you want to have just enough to keep your cores busy. Having many threads means you have to keep all of them in memory, which adds to the overhead. It might also add to the time it takes to context switch.
Since you stated that your tasks in step C do not have any blocking IO calls, you roughly need one thread per core, that is 24 × 6 = 144 threads.
Since you want to read messages for each connection in order, you should assign connections to threads.
With that distribution in mind you can also reduce the number of queues to match the number of threads.

Now you have one worker shoveling messages into the ring buffer, as before.
You can keep the logic in step B as it is, except that it also needs to decide which connection goes to which queue. In the simplest case the assignment is queue_no = connection_no % 24, as in the sketch below.

In step C you now have worker threads that are active as long as there is work to do in their queue, and sleep otherwise. There is no context switching involved, and every single thread can use 900 MB of memory.

All of this assumes that messages are fairly evenly distributed across connections. If you have 5 connections that make up 90% of the traffic and 2 of them happen to be on the same thread, you might run into a situation where one thread is idle and another one can’t keep up. This can be fixed, but it really depends on the shape of the traffic.
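
A minimal sketch of that arrangement (the thread count, class names, and the use of Runnable as the message type are illustrative, not from the system described):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // A fixed set of worker threads, one queue per worker, and connections
    // pinned to workers by modulo so per-connection message order is preserved.
    public class PinnedWorkers {

        static final int WORKERS = 24;   // roughly one per core on a 24-core box
        static final BlockingQueue<Runnable>[] QUEUES = newQueues(WORKERS);

        @SuppressWarnings("unchecked")
        static BlockingQueue<Runnable>[] newQueues(int n) {
            BlockingQueue<Runnable>[] qs = new BlockingQueue[n];
            for (int i = 0; i < n; i++) qs[i] = new LinkedBlockingQueue<>();
            return qs;
        }

        public static void main(String[] args) {
            for (int i = 0; i < WORKERS; i++) {
                final int id = i;
                new Thread(() -> {
                    try {
                        while (true) QUEUES[id].take().run();   // take() blocks (sleeps) when the queue is empty
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();     // exit cleanly on interrupt
                    }
                }, "worker-" + id).start();
            }
        }

        // Step B calls this: the same connection always lands on the same queue,
        // so its messages are processed in order by a single worker.
        static void dispatch(int connectionNo, Runnable message) {
            QUEUES[connectionNo % WORKERS].add(message);
        }
    }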

Do You Sell VPS at Scale? Cloud.net Will Match Your Current Platform Pricing (if they can!)

If you sell cloud hosting with any platform that isn’t based on OnApp, and you’re paying your current platform provider more than $500/month… Cloud.net is offering to match your current pricing if you switch to their service.

As you may have heard, Cloud.net is a new “SaaS cloud platform” from OnApp. OnApp’s cloud platform is used by many well-known hosting companies as well as larger MSPs and Telcos.

Cloud.net is a new service aimed at VPS hosting providers, and it takes a very different approach: starting at $50/month you can use the OnApp control panel and KVM stack to sell cloud using your own servers – or you can use OnApp’s compute marketplace and sell cloud without having to buy or manage any infrastructure at all. There’s a 1-week free trial too!

Cloud.net’s “match it” offer, starting at $500/m, is aimed at larger VPS hosts – but if you’re looking for a new platform and like the sound of running on the well-known OnApp codebase, all they’re asking you to do is get in touch!

You can do that at https://cloud.net/match-it – you’ll find more information about the deal there, too.

benchmark – Can I set the scale factor to a value that is not listed in the manual when using DBGEN to generate a TPC-H dataset?

Certain scale factors are given in DBGEN’s manual, including 1, 10, 100, 300, 1000, 3000, 10000, 30000, and 100000. The manual declares that only these SF values are compliant:

-s --scale factor. TPC-H runs are only compliant when run against SF's 
                    of 1, 10, 100, 300, 1000, 3000, 10000, 30000, 100000

Can I set the SF to other values, such as 200, 230, or 370? I have tried sf=50 and got a dataset, but I do not know what the consequences of using a non-compliant SF are.
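
For reference, that test run looked like this (the -s flag is the one documented above; assuming the dbgen binary is in the current directory, and using the usual TPC-H convention that SF 50 corresponds to roughly 50 GB of raw data):

    ./dbgen -s 50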

dnd 5e – How does the size scale work?

There’s a little table in the first chapter of the Monster Manual (the chapter that explains how stats work) that gives the space monsters take up on the battlefield. It’s on page 6 of the book.

The spaces are as follows:

Tiny: 2.5 by 2.5ft

Small: 5 by 5ft

Medium: 5 by 5ft

Large: 10 by 10ft

Huge: 15 by 15ft

Gargantuan: 20 by 20ft or larger.

In the case of Gargantuan, the monster’s description might contain more information about how big it really is. The Tarrasque, for example, is described as being “70 feet long and 50 feet wide”, and so would probably require something like a 100 by 100ft space to fight comfortably.

This same table is also in the Player’s Handbook on page 191 along with a bit of extra info on what “Size/Space” actually mean.

This is, of course, information for how big the monsters are on a battlemap. For the actual physical size of a specific creature, you’d have to check their description. It might give you some info, or if not, you’ll have to make a ruling on it (maybe by looking at something comparably sized that does have a listed size.)

surveys – Alternatives to System Usability Scale (SUS)

Regarding the question, “In particular…is there anything which produces somewhat similarly reliable results with fewer questions?”

You should take a look at the published research on the Usability Metric for User Experience (UMUX), introduced by Finstad (2010). The UMUX has four items, has typically been found to have desirable psychometric properties, and produces scores that correlate highly with the SUS; in some studies its scores have also tracked very closely with SUS scores in magnitude.

In 2013, Lewis, Utesch, and Maher took two of the UMUX items to produce the UMUX-LITE. According to the abstract of that CHI paper, “In this paper we present the UMUX-LITE, a two-item questionnaire based on the Usability Metric for User Experience (UMUX). The UMUX-LITE items are “This system’s capabilities meet my requirements” and “This system is easy to use.” Data from two independent surveys demonstrated adequate psychometric quality of the questionnaire. Estimates of reliability were .82 and .83 – excellent for a two-item instrument. Concurrent validity was also high, with significant correlation with the SUS (.81, .81) and with likelihood-to-recommend (LTR) scores (.74, .73). The scores were sensitive to respondents’ frequency-of-use. UMUX-LITE score means were slightly lower than those for the SUS, but easily adjusted using linear regression to match the SUS scores. Due to its parsimony (two items), reliability, validity, structural basis (usefulness and usability) and, after applying the corrective regression formula, its correspondence to SUS scores, the UMUX-LITE appears to be a promising alternative to the SUS when it is not desirable to use a 10-item instrument.”

There have been additional studies of the UMUX-LITE since then. Here is a list of papers to read on the topic:

Borsci, S., Federici, S., Bacci, S., Gnaldi, M., & Bartolucci, F. (2015). Assessing user satisfaction in the era of user experience: Comparison of the SUS, UMUX and UMUX-LITE as a function of product experience. International Journal of Human-Computer Interaction, 31, 484-495.

Finstad, K. (2010). The usability metric for user experience. Interacting with Computers, 22, 323-327.

Finstad, K. (2013). Response to commentaries on “The Usability Metric for User Experience”. Interacting with Computers, 25, 327-330.

Lewis, J. R. (2013). Critical review of “The Usability Metric for User Experience”. Interacting with Computers, 25, 320-324.

Lewis, J. R., Utesch, B. S., & Maher, D. E. (2013). UMUX-LITE – When there’s no time for the SUS. In Proceedings of CHI 2013 (pp. 2099-2102). Paris, France: Association for Computing Machinery.

Lewis, J. R., Utesch, B. S., & Maher, D. E. (2015). Measuring perceived usability: The SUS, UMUX-LITE, and AltUsability. International Journal of Human-Computer Interaction, 31, 496-505.

unity – Retain Position and Scale Values of Render Texture Canvas Elements to another Canvas

Now the title might not express the entirety of the question, but here’s my problem. In my main scene I have a camera and a canvas in overlay mode; let’s call this “View A”. In this view is a crosshair whose position and size I want to manipulate depending on elements inside the canvas of “View B”. “View B” is located in the main scene as a render texture; it contains a camera and a canvas in camera space with another crosshair. What I want to do is the following: set the position and size of the crosshair in “View A” so that it practically overlaps with the crosshair in “View B”, i.e. if the crosshair in “View B” is in the lower-left corner of its screen, I’d like the crosshair in “View A” to be in the same position, with the same size, but in the canvas of “View A”.


What I’m currently capable of is setting the crosshair in “View A” to the same location as the crosshair in “View B”, but only if the crosshair in “View B” is at its origin, as all I have to do in that case is get the origin of the render texture quad in the screen space of the “View A” canvas. I do this in the following way:

    // Convert the render-texture quad's world position to screen space,
    // then into the local coordinates of the "View A" canvas rect.
    RectTransformUtility.ScreenPointToLocalPointInRectangle(
        CanvasTransform,
        CameraComponent.WorldToScreenPoint(RenderTextureQuadTransform.position),
        null,                       // overlay canvas, so no camera is needed here
        out Vector2 localPoint);

    // Place the "View A" crosshair at that local point.
    CursorTransform.localPosition = localPoint;

But now I’m stuck figuring out how to get a scale factor that makes the crosshairs in “View A” and “View B” look the same size, and how to adjust the position of the crosshair in “View A” depending on the position of the crosshair in “View B”.