performance – How can I speed up my calculations with loops in Python?

I wrote this code, but it runs very slowly.

I’m measuring how many times I have to run the random generator to produce numbers less than or equal to inv, in this case six. I count the number of attempts until a number <= invers is drawn; on each success I decrease invers by 1 and repeat the loop until invers reaches 0. In other words, I keep drawing until six qualifying numbers have been generated.

And I repeat the whole experiment 10 ** 4 times to find the arithmetic mean.

Please help me speed this code up; it runs extremely slowly. The solution should not use third-party modules. I would be immensely grateful. Thanks!

import random

inv = 6

def math_count(inv):
    n = 10 ** 4          # number of trials to average over
    counter = 0          # total attempts across all trials
    for _ in range(n):
        invers = inv     # successes still needed in this trial
        count = 0        # attempts since the last success
        while invers > 0:
            count += 1
            random_digit = random.randint(1, 45)
            if random_digit <= invers:
                invers -= 1
                counter += count
                count = 0
    return counter / n

print(math_count(inv))
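
One way to speed this up, offered as a sketch rather than a definitive rewrite: while invers == k, each attempt succeeds with probability k/45, so the number of attempts spent at that stage is geometrically distributed and can be simulated with a single draw (inverse-transform sampling). Better still, the mean being estimated has a closed form, the sum of 45/k over k = 1..inv, which needs no simulation at all. The names math_count_fast and expected_count below are mine, not standard:

import math
import random

def math_count_fast(inv, trials=10**4):
    # Simulate each stage with one geometric draw instead of many single draws.
    total = 0
    for _ in range(trials):
        for k in range(inv, 0, -1):
            p = k / 45                    # chance that one draw is <= k
            u = 1.0 - random.random()     # uniform in (0, 1]
            # attempts until the first success ~ Geometric(p)
            total += max(1, math.ceil(math.log(u) / math.log(1.0 - p)))
    return total / trials

def expected_count(inv):
    # Exact expectation: stage k takes 45/k attempts on average.
    return sum(45 / k for k in range(1, inv + 1))

print(math_count_fast(6))   # Monte Carlo estimate, close to 110.25
print(expected_count(6))    # exact: 45 * (1 + 1/2 + ... + 1/6) = 110.25

If an exact answer is acceptable, expected_count alone answers the question instantly and uses no randomness at all.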

ruby on rails – Nested attributes, column calculations, and where statement

I am not getting the output I am looking for; it seems like I might be calling my variable incorrectly.

Here is my controller variable for @vendors:
@vendors = User.includes(line_items: { sale: :skiswap }).where(sales: { skiswap_id: @skiswap.id })

Then in the view I have:

<% @vendors.each do |vendor| %>
  <tr class="group border-t border-gray-400 hover:bg-gray-100">
    <td class="p-3"><%= vendor.company %></td>
    <td class="p-3"><%= vendor.name  %></td>
    <td class="p-3"><%= vendor.email%></td>
    <td class="p-3"><%= number_to_phone(vendor.phone_number, area_code: true)%></td>
    <td class="p-3"><%= vendor.line_items.sum(:total_price)%></td>
  </tr>
<% end %>

The line
<td class="p-3"><%= vendor.line_items.sum(:total_price)%></td>
is not using the original where statement, so my total is incorrect.

The server log shows this:

  SQL (1.7ms)  SELECT "users"."id" AS t0_r0, "users"."email" AS t0_r1, "users"."encrypted_password" AS t0_r2, "users"."reset_password_token" AS t0_r3, "users"."reset_password_sent_at" AS t0_r4, "users"."remember_created_at" AS t0_r5, "users"."confirmation_token" AS t0_r6, "users"."confirmed_at" AS t0_r7, "users"."confirmation_sent_at" AS t0_r8, "users"."unconfirmed_email" AS t0_r9, "users"."first_name" AS t0_r10, "users"."last_name" AS t0_r11, "users"."time_zone" AS t0_r12, "users"."accepted_terms_at" AS t0_r13, "users"."accepted_privacy_at" AS t0_r14, "users"."announcements_read_at" AS t0_r15, "users"."admin" AS t0_r16, "users"."created_at" AS t0_r17, "users"."updated_at" AS t0_r18, "users"."invitation_token" AS t0_r19, "users"."invitation_created_at" AS t0_r20, "users"."invitation_sent_at" AS t0_r21, "users"."invitation_accepted_at" AS t0_r22, "users"."invitation_limit" AS t0_r23, "users"."invited_by_type" AS t0_r24, "users"."invited_by_id" AS t0_r25, "users"."invitations_count" AS t0_r26, "users"."company" AS t0_r27, "users"."phone_number" AS t0_r28, "line_items"."id" AS t1_r0, "line_items"."quantity" AS t1_r1, "line_items"."price" AS t1_r2, "line_items"."total_price" AS t1_r3, "line_items"."created_at" AS t1_r4, "line_items"."updated_at" AS t1_r5, "line_items"."consigner_type" AS t1_r6, "line_items"."consigner_amount" AS t1_r7, "line_items"."skiswap_take" AS t1_r8, "line_items"."sale_id" AS t1_r9, "line_items"."tag" AS t1_r10, "line_items"."inventory_id" AS t1_r11, "sales"."id" AS t2_r0, "sales"."amount" AS t2_r1, "sales"."total_amount" AS t2_r2, "sales"."tax" AS t2_r3, "sales"."vendor_payout" AS t2_r4, "sales"."public_payout" AS t2_r5, "sales"."comments" AS t2_r6, "sales"."skiswap_id" AS t2_r7, "sales"."created_at" AS t2_r8, "sales"."updated_at" AS t2_r9, "sales"."remaining_balance" AS t2_r10, "sales"."complete" AS t2_r11, "sales"."void" AS t2_r12, "skiswaps"."id" AS t3_r0, "skiswaps"."swap_name" AS t3_r1, "skiswaps"."user_id" AS t3_r2, "skiswaps"."description" AS t3_r3, "skiswaps"."vendor_precentage" AS t3_r4, "skiswaps"."website" AS t3_r5, "skiswaps"."contact_name" AS t3_r6, "skiswaps"."contact_email" AS t3_r7, "skiswaps"."contact_number" AS t3_r8, "skiswaps"."created_at" AS t3_r9, "skiswaps"."updated_at" AS t3_r10, "skiswaps"."event_address" AS t3_r11, "skiswaps"."state" AS t3_r12, "skiswaps"."city" AS t3_r13, "skiswaps"."zip" AS t3_r14, "skiswaps"."public_precentage" AS t3_r15, "skiswaps"."tax" AS t3_r16 FROM "users" LEFT OUTER JOIN "inventories" ON "inventories"."user_id" = "users"."id" LEFT OUTER JOIN "line_items" ON "line_items"."inventory_id" = "inventories"."id" LEFT OUTER JOIN "sales" ON "sales"."id" = "line_items"."sale_id" LEFT OUTER JOIN "skiswaps" ON "skiswaps"."id" = "sales"."skiswap_id" WHERE "sales"."skiswap_id" = $1  [["skiswap_id", 3]]
  ↳ app/views/pos_dashboard/index.html.erb:40
   (0.6ms)  SELECT SUM("line_items"."total_price") FROM "line_items" INNER JOIN "inventories" ON "line_items"."inventory_id" = "inventories"."id" WHERE "inventories"."user_id" = $1  [["user_id", 3]]
  ↳ app/views/pos_dashboard/index.html.erb:46
  CACHE  (0.0ms)  SELECT SUM("line_items"."total_price") FROM "line_items" INNER JOIN "inventories" ON "line_items"."inventory_id" = "inventories"."id" WHERE "inventories"."user_id" = $1  [["user_id", 1]]

The total is accurate for the user’s overall total_price, but not for the total_price within that swap…

Any help would be great
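
For reference, one way to scope each vendor’s total to the swap (a sketch, assuming LineItem belongs_to :sale, as the includes above implies) is to re-apply the filter inside the sum:

<td class="p-3"><%= vendor.line_items.joins(:sale).where(sales: { skiswap_id: @skiswap.id }).sum(:total_price) %></td>

Note that sum(:total_price) with a symbol always issues a fresh SQL query, which is why the eager-loaded filter is ignored; summing the already-loaded records in Ruby, e.g. vendor.line_items.sum(&:total_price), should instead use the rows the includes/where actually fetched and avoid the extra per-vendor query.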

calculations – Holga 120 WPC exposure times

Why would you commit any more film until you have had the first roll from any film camera developed? Get the first roll developed and see what it looks like.

If it’s massively over or under exposed, then dedicate the next roll to test shots where you systematically start at a longer exposure and then decrease the exposure time about one-half stop each frame or two. Don’t forget to write down the exposure times for each frame so you’ll know what you did for the frames that come out properly exposed.

Doing it systematically for one roll will almost certainly get you where you want to be with less “wasted” film than trying to hit the nail on the head haphazardly for several rolls before you get lucky and hit it.

EV15 at ISO 100 and f/133 is around 1/2 second. This should be proper exposure for a brightly sunlit scene. The problem with calculating an f-number from the absolute size of a physical aperture to determine exposure with a pinhole camera is that “focal length” isn’t exactly defined in the same way as it is with refractive lenses.

Having said that, most 120 cameras have more than 39.9mm between the lens board and the film plane. Does your Holga only have about 40mm (1.6 inches) between the pinhole and the film plane? That’s the focal length that figures to f/133 with a 0.3mm entrance pupil (aperture).
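
For reference, the arithmetic behind those figures (assuming the 0.3 mm pinhole and 40 mm pinhole-to-film distance mentioned above):

$$N = \frac{f}{d} = \frac{40\,\mathrm{mm}}{0.3\,\mathrm{mm}} \approx 133, \qquad t = \frac{N^2}{2^{\mathrm{EV}}} = \frac{133^2}{2^{15}} \approx 0.54\,\mathrm{s} \approx \tfrac{1}{2}\,\mathrm{s}\ \text{(EV 15, ISO 100)}$$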

Another consideration is that with an aperture that narrow, an appreciable amount of the light going through it will be scattered due to the effects of diffraction. Much of that scattered light will fall outside the area of your negative.

Don’t forget that with film, any exposure longer than about one second or so will be subject to the Schwarzschild effect.

Most film manufacturers publish data sheets for each of their films that outline development times for shooting the film at different speeds as well as for developing the film when it is shot at the advertised sensitivity. They also include data regarding exposures longer than about one second (for most films) that are affected by the Schwarzschild effect, also known as reciprocity failure. Each film has different characteristics, and how much compensation must be made for long exposures can vary significantly from one film to the next.
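
As a rough illustration of the form such corrections take (the exponent p below is film-specific, and only the manufacturer’s data sheet is authoritative), the Schwarzschild correction is often quoted as

$$t_{\mathrm{corrected}} = t_{\mathrm{metered}}^{\,p}, \qquad p > 1\ \text{for exposures beyond roughly 1 s}$$

so with p = 1.3, for example, a metered 2 s exposure becomes about $2^{1.3} \approx 2.5$ s.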


Daily Average Calculations in Google Sheets

In the editable spreadsheet below I have a column of dates (column A) alongside a column of data (column D). I want to calculate the daily average of the data in column D and enter it in column F (I would also like to do the same for weekly and day-of-week average calculations, but I’d be happy with daily averages to start). Could someone provide some guidance on this? Thanks!

Sample Sheet
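
A sketch of one approach, assuming the dates start in A2, the data in D2, and that listing the distinct dates in a helper column E is acceptable (all cell references here are illustrative):

In E2: =SORT(UNIQUE(A2:A))
In F2, filled down alongside E: =AVERAGEIF(A:A, E2, D:D)

AVERAGEIF averages every row of column D whose date in column A matches the date in E2. Weekly and day-of-week averages follow the same pattern against helper columns such as =WEEKNUM(A2) or =WEEKDAY(A2).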

plotting – When do errors stop calculations?

When plotting Plot[Exp[-1/(1 - (x + 10)^2)], {x, -11, -10}], I get the following warning

[screenshot of the warning message]

but the plot is still produced. However, if I define a piecewise function and try to plot it, I get the same warning and no plot is produced.

Plot[Piecewise[{{Exp[-1/(1 - (x + 10)^2)], x <= -10}, {Exp[-1], -10 < x < 10}}], {x, -11, 10}]

Why is this happening? The documentation of the error suggests this can be avoided by adjusting the precision. But I’m more interested in understanding why the error stops the output in one case and not in the other.
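
For reference, a quick check shows where the 1/0 comes from: the denominator vanishes at the left endpoint of the plotted range (and again at x = -9), so Plot samples a genuine singularity:

Solve[1 - (x + 10)^2 == 0, x]
(* {{x -> -11}, {x -> -9}} *)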

dnd 5e – Echo Knight multiclass damage calculations

I am currently working on a new character concept for an upcoming game and was unsure about what damage the Echo Knight’s echo would actually be able to deal.

For this scenario, the build will be a Minotaur Zealot Barbarian 5 / Echo Knight Fighter 3 with the Great Weapon Fighting fighting style and Great Weapon Master feat.

Unleash Incarnation

You can heighten your echo’s fury. Whenever you take the Attack action, you can make one additional melee attack from the echo’s position.

Hammering Horns

Immediately after you hit a creature with a melee attack as part of the Attack action on your turn, you can use a bonus action to attempt to shove that target with your horns. The target must be no more than one size larger than you and within 5 feet of you. Unless it succeeds on a Strength saving throw against a DC equal to 8 + your proficiency bonus + your Strength modifier, you push it up to 10 feet away from you.

Great Weapon Master

You’ve learned to put the weight of a weapon to your advantage, letting its momentum empower your strikes. You gain the following benefits: On your turn, when you score a critical hit with a melee weapon or reduce a creature to 0 hit points with one, you can make one melee weapon attack as a bonus action. Before you make a melee attack with a heavy weapon that you are proficient with, you can choose to take a -5 penalty to the attack roll. If the attack hits, you add +10 to the attack’s damage.

Great Weapon Fighting

When you roll a 1 or 2 on a damage die for an attack you make with a melee weapon that you are wielding with two hands, you can reroll the die and must use the new roll, even if the new roll is a 1 or a 2. The weapon must have the two-handed or versatile property for you to gain this benefit.

So, there are a few parts to this question:

  1. If my character is currently raging when he manifests his echo, would attacks made through the echo be able to apply Rage and Divine Fury damage?
  2. When making the attack via the echo, would the attack penalty and damage increase from Great Weapon Master apply?
  3. When rolling the damage for the attack, would Great Weapon Fighting rerolls apply?
  4. Would the Minotaur’s Hammering Horns ability be able to push the target?

dnd 5e – Understanding CR calculations of these monsters from the Monster Manual

To “explain” the wolf, take a look at the Ranger class in the PHB

The Beast Master ranger (PHB, p. 93) gets to attract a CR 1/4 creature, not a CR 1/2 creature.

At 3rd level, you gain a beast companion that accompanies you on your
adventures and is trained to fight alongside you. Choose a beast that
is no larger than Medium and that has a challenge rating of 1/4 or
lower (the hawk, mastiff, and panther are examples).

It is reasonable to suspect that a design desire for the Ranger to be able to have the wolf as a companion informed the decision to rate it CR 1/4.

Don’t look for precision in the CR tool: it isn’t that precise

As you mentioned, the CR rating method is soft around the edges, but that isn’t a problem once you realize that a given party composition will experience greater or lesser difficulty in dealing with a given monster or group of monsters.

There are 12 PC classes. There are dozens of subclasses.

There are 4 or 5 PCs in most parties, and the CR budgets in the DMG are calibrated to a 4-person party.

To make the leap that a monster with a CR of 2 or 1 is an identical challenge to all parties, regardless of party composition and skill mix, is to overlook the wide variety of party makeups that a given monster faces. I have watched this in play and seen different results based on the mix of class features and spells that the monsters face.

Party composition can swing encounter difficulty (rendering CR moot)

As an example, a party with a 2nd-level Ranger (ranged attacks, longbow) and a 2nd-level Warlock (eldritch blast with the Repelling Blast invocation) can do a much better job of kiting an ogre – avoiding the ogre’s lethal attacks with some frequency – thanks to it being knocked back repeatedly.

Toss in any other spellcaster, or a character with the Magic Initiate feat, who has ray of frost, and they can do an even better job of kiting that particular monster by both knocking it back and slowing it down.

Against 3 Orcs with the Aggressive feature (MM, p. 246) …

Aggressive. As a bonus action, the orc can move up to its speed toward
a hostile creature that it can see.

… that same party isn’t kiting anyone.

CR formulation: it’s a rough approximation at best, but it is workable. You are asking for more precision than it has to offer.


In the interests of clarity, the term ‘kiting’ is explained here. Thank you, @Kirt.

WpDataTables plugin – remove summation sign from calculations

I am running WpDataTables. Does anyone know how to remove the summation sign?

I need to remove it so that I can dynamically utilize the summed number.

c++ – The ability for a system to report its L1 data cache size and to use it within constexpr expressions and compile-time calculations

A proposal for the C++ standard library in regards to a system’s L1 data cache size…

This pertains to the ability of a system to report its L1 cache size to its Operating System and Compiler so that it could be used within a constexpr or compile-time context.

Considering that modern C++ is agnostic of the underlying architecture, operating system, and compiler… there is currently no generic, modular, or portable way to achieve this…

However, when a computer goes through its boot process, information about the system and its hardware specifications is passed to the operating system… Shouldn’t the L1 data cache size of the machine therefore be known to the operating system and to C++ compilers? Regardless of the architecture (Intel, AMD, ARM, RISC-V, etc.), the operating system (Windows, Mac, Linux, Android, etc.), or the compiler (MSVC, Clang, GCC, etc.), shouldn’t this information be readily available?

I think that this would be useful within the standard library in order to have something like this:

std::system::l1_cache_size;  // in kilobytes 
std::system::l1_cache_line_size; // in bytes

These would be precomputed constexpr variables, predefined based on the system the code will be executed on.

For example, my system is an Intel Quad Core Extreme with 32 kilobytes of L1 data cache arranged in 4 blocks, one per core, and 64 bytes per cache line. Here the important values would be 32 and 64; the value 32 can always be multiplied by 1024 to convert it back to total bytes…

These would be precalculated values determined or reported by the OS. There should be a mechanism that gets these values dynamically from the system, yet still allows them to be used as constexpr values.

For example, let’s say I want to create a std::array that is sized based on the L1 cache size.

An example pseudo-class or struct…

#include <array>
#include <cstdint>

// helper function: how many elements of T fit in the L1 data cache
template<typename T>
constexpr uint32_t cached_element_count() {
    return (std::system::l1_cache_size * 1024) / sizeof(T);
}

template<typename T>
struct cache_array {
    std::array<T, cached_element_count<T>()> elements_;
};

int main() {
    cache_array<int> a;
    cache_array<double> b;
    return 0;
}

On my machine, since the L1 cache size is 32 KB, a would be constructed with 8,192 elements for a 4-byte int and b with 4,096 elements for an 8-byte double; the counts adapt automatically to sizeof(T) for whatever target I compile for…

If this code were to run on a different machine, even though it was compiled on mine, it ought to adapt so that these constexpr values reflect the system it is currently running on.

So, behind the scenes, when std::system::l1_cache_size is used within a constexpr context, the value should be determined not by the machine the code was compiled on but by the machine it is running on, while still being known at compile time…

Would this type of mechanism be possible, and if so, would this be an applicable proposal for the C++ standard library?
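
For what it’s worth, C++17 already standardizes a small piece of this in <new>: the constexpr hints std::hardware_destructive_interference_size and std::hardware_constructive_interference_size. They cover only the cache-line size, and they are fixed by the compiler for its target rather than read from the machine the binary eventually runs on, which is exactly the tension in the proposal above. A minimal sketch (assumes a compiler and standard library that implement these, e.g. recent MSVC or GCC):

#include <array>
#include <cstddef>
#include <new>

// C++17 compile-time hint for the cache line size of the *target* platform.
// It does not change based on the machine the binary later runs on.
constexpr std::size_t line_size = std::hardware_destructive_interference_size;

// Example use: pad each counter to its own cache line to avoid false sharing.
struct alignas(line_size) padded_counter {
    long value = 0;
};

int main() {
    std::array<padded_counter, 4> counters{};
    return static_cast<int>(counters[0].value);
}

A run-time value that is also constexpr is a contradiction as the language stands; the closest run-time equivalents are OS-specific queries (for example sysconf on glibc or GetLogicalProcessorInformation on Windows), and those cannot feed constant expressions.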