dnd 5e: Does a monk decide to use Slow Fall before or after the fall damage is rolled?

At 4th level, monks gain the Slow Fall class feature:

Slow Fall

Beginning at 4th level, you can use your reaction when you fall to reduce any falling damage you take by an amount equal to five times your monk level.

My question is: does a monk decide to use his reaction for Slow Fall before or after the fall damage is rolled?

Usually, when Slow Fall opportunities have come up for me, it doesn't matter: I have nothing else to spend my reaction on at that moment, so I always use it, even to negate a single point of damage. But recently a situation arose in which I wanted to save my reaction for something else, and whether spending it was worth it depended on how much damage I took from the fall.
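
(To put numbers on it: at 4th level the reduction is 5 × 4 = 20 points, while falling damage is 1d6 bludgeoning per 10 feet fallen, so a 40-foot fall deals 4d6, anywhere from 4 to 24 damage; whether a 20-point reduction negates the fall entirely depends on the roll, which is exactly why the timing matters.)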

My first inclination was that the monk must decide before the fall damage is rolled, because other similar abilities that affect rolls explicitly say they can be used after the roll is made but before the result is determined, and Slow Fall says no such thing.

But on thinking about it more, I am not sure that is correct, because Slow Fall does not affect whether a roll succeeds or fails. Imagine a fall that lasts two turns: a monk presumably cannot use Slow Fall during the first turn and expect the damage reduction to carry over to the second turn (when the fall ends). So it makes more sense that what you are reacting to is the end of the fall, when the damage is dealt, not the fall itself.

Very slow data transfer between two internal volumes in Synology NAS

I tried copying within File Station: both Move to and Copy to gave the same results. I also tried changing the location of the shared folder to a different volume. The results were the same.

Even with multi-GB files (sequential writes, not 4K small files), it peaks at 18 MB/s. That is even slower than my USB 2.0 HDD. For a transfer that happens internally within the same NAS, between two SATA 3 Gbps hard drives, it should be able to run at 150-200 MB/s.
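
(For scale: USB 2.0 is 480 Mbit/s, roughly 60 MB/s theoretical and about 35-40 MB/s in practice, so 18 MB/s is around half of what even that old interface manages.)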

Does anyone have any idea why that is the case? Here is an image of my transfer speed.

Synology Slow Transfer

There is no RAID rebuild, no volume expansion, no encryption, no file indexing, and no speed limit set in File Station. This is on an old 212J, but CPU speed shouldn't make much of a difference for a file transfer, and the CPU was not maxed out during the transfer.

Could someone provide a technical explanation of why this happens?

Nikon – Slow shutter speed in aperture mode

You said "all night," which suggests it was too dark for your setup. The ways to get a faster shutter speed are:

  1. Move to a brighter area with enough light, where the automation can do better. Photography is hard without enough light. One way to provide more light is to use flash.

  2. Open the aperture and/or increase the ISO so that the automation chooses a faster shutter speed. If you are seeing a shutter speed of about 1 second now, it seems you need at least 5 or 6 more stops to reach something that is still slow, but perhaps adequate (see the worked example after this list).

  3. Use S or M mode to set a faster shutter speed directly. The automation will then raise the Auto ISO and/or open the aperture further, if there is still room to go. Can you open it any more?
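
A worked example of the stop arithmetic (the starting numbers are assumptions for illustration): each stop halves the shutter time, so from 1 second, six stops faster is 1 s → 1/2 → 1/4 → 1/8 → 1/15 → 1/30 → 1/60 s. To gain those six stops without adding light, you could, for example, open the aperture from f/5.6 to f/2.8 (2 stops) and raise the ISO from 100 to 1600 (4 stops).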

Frankly, what you really need to learn about is camera exposure. In Google searches this is often called the exposure triangle, which is not a great name (there is no triangle, just three factors), but it is an extremely important idea for anyone learning to use a camera. It is how shutter speed, aperture, and ISO combine to produce an exposure, and specifically, which settings you need for a given situation, such as stopping motion or increasing depth of field. You can find plenty on Google about exposure. It is the first thing a photographer has to learn.

seo – How can I get rid of these outbound links that slow down my YSlow score?

I am trying to make my website faster, and GTmetrix tells me that (among other things) these links are lowering my YSlow score because browser caching is not being leveraged for them. They are all third-party links, and although I have deactivated and uninstalled the plugins they came from, they continue to slow down my site, since I cannot set an expiration on them because they are external. I have looked through cPanel and every index.php, and I have removed them from Google Tag Manager, yet they persist. What do I have to do? Where can I find them, or how can I set a longer expiration on them? One of them I need to remove entirely, because it is blocking the first load of my site. When I look in the Chrome element inspector, the links appear inside a script in the index of the page.

These are the links, under "Leverage browser caching for the following cacheable resources":

https://serve.albacross.com/track.js (expiration not specified)
https://js.hs-scripts.com/4992870.js (1 minute)
https://js.hs-analytics.net/analytics/1579265100000/4992870.js (5 minutes)
https://www.google.com/recaptcha/api.js?render=6LeTjcEUAAAAAGHEgVExfcfx9p8ABN9Lck5wv9wa&ver=3.0 (5 minutes)
https://www.google.com/recaptcha/api2/webworker.js?hl=en&v=A1Aard-wURuGsXRGA7JMOqVO (5 minutes)
https://js.hsadspixel.net/fb.js (10 minutes)
https://www.googletagmanager.com/gtm.js?id=GTM-KDR5T9R (15 minutes)
https://www.google-analytics.com/plugins/ua/linkid.js (1 hour)
https://www.google-analytics.com/analytics.js (2 hours)
https://snap.licdn.com/li.lms-analytics/insight.min.js (8 hours 47 minutes)
https://www.linkedinbranding.es/cdn-cgi/scripts/5c5dd728/cloudflare-static/email-decode.min.js (2 days)

I have my .htaccess configured so that all files get an expiration of at least 1 month (I think the lowest expiration was 2 weeks), but I cannot touch those external links.
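
For reference, the kind of .htaccess rule I mean is something like this mod_expires block (a sketch; my actual file differs):

<IfModule mod_expires.c>
  ExpiresActive On
  # Cache everything served from this domain for at least a month
  ExpiresDefault "access plus 1 month"
</IfModule>

As far as I understand, though, rules like this only apply to resources served from my own domain; the third-party scripts above send their own cache headers, which my server cannot override.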

sql – Slow subquery execution

This query runs very slowly and I wonder if it can be improved.

We have an Access database split into a front end / back end (FE/BE) with around 50 users. The BE is in a network folder and the FE is on users' hard drives.

The data entered in the FE is stored in FE tables until the user has finished entering all the data required for a record. They then click a button to send the data to the BE in one go, into tables identical to those in the FE. In the SQL below, the BE table has the suffix '_Share'.

The table contains 2 key fields: QuoteID and OptionID. There is a one-to-many relationship between the two, for example:

QuoteID   OptionID

1234      1
1234      2

3333      1
3333      2
3333      3

As the user works, they create data for new options to go with existing quotes. The code checks whether a QuoteID in the BE already has an OptionID that the user created in the FE; if not, the data for that OptionID is appended to the BE.

INSERT INTO T_Option_Category_Benefits_Share
SELECT T_Option_Category_Benefits.*
FROM T_Option_Category_Benefits 
WHERE (((T_Option_Category_Benefits.QuoteID)=1971) 
AND ((T_Option_Category_Benefits.OptionID) 
NOT IN (SELECT T_Option_Category_Benefits_Share.OptionID FROM T_Option_Category_Benefits_Share WHERE T_Option_Category_Benefits_Share.QuoteID=1971)));
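
For reference, the same append can be written as an unmatched (frustrated) LEFT JOIN, which often performs much better in Access over a network than a NOT IN subquery, since the subquery may be re-evaluated per row. A sketch with the same table names (untested against our data; Access's SQL view does not allow comments, so the explanation stays up here):

INSERT INTO T_Option_Category_Benefits_Share
SELECT b.*
FROM T_Option_Category_Benefits AS b
LEFT JOIN T_Option_Category_Benefits_Share AS s
ON (b.QuoteID = s.QuoteID) AND (b.OptionID = s.OptionID)
WHERE b.QuoteID = 1971
AND s.OptionID IS NULL;

The IS NULL test keeps only FE rows with no matching BE row, which is the same condition the NOT IN expresses. Either way, indexing QuoteID and OptionID in both tables is likely to matter even more than the query shape, since unindexed key fields force Access to pull whole tables across the network.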

The key fields are not indexed. The BE table contains 18 columns and around 100k rows. The network is generally quite slow at peak times. We are using Office 365 on Windows 10.

Computer architecture: which of these devices could slow down the processor?

I have a test question.

Which devices inside the processor are used to accelerate work indirectly, i.e., the program does not execute code for that device?

Possible answers: DRAM | Cache | Pipeline | GPU | RAM | ARM | Stack | FPU

I think we can immediately say that DRAM, GPU, and RAM are incorrect choices, because they are not inside the CPU; they are separate parts of the computer.
The stack also lives in RAM, not in the CPU.
So the remaining answers are cache, pipeline, ARM, and FPU?
I'm not sure about the floating-point unit either.

postgresql: slow bitmap heap scan in a group/count query on a large table

I run an application with a PostgreSQL 10.10 database.

I am optimizing a lot of queries in our application, with excellent results so far, but there is one specific query that takes approximately 1 minute if it is not served from the PG shared buffers, and I really do not know how to speed it up drastically.

The query selects around 100,000 rows from a table that contains approximately 35 million rows in total, filters on criteria for which an index exists, then groups them by month of a timestamp field. The query is generated by ankane's groupdate Ruby gem and looks like this:

SELECT
  COUNT(*) AS count_all,
  (
    DATE_TRUNC(
      'month',
      ("ahoy_events"."time" :: timestamptz) AT TIME ZONE 'America/Chicago'
    )
  ) AT TIME ZONE 'America/Chicago' AS date_trunc_month_ahoy_events_time_timestamptz_at_time_zone_amer
FROM "ahoy_events"
WHERE
  "ahoy_events"."merchant_id" = 5081
  AND "ahoy_events"."name" = 3
  AND "ahoy_events"."time" > '2019-01-15 21:31:54.794496'
  AND ("ahoy_events"."time" IS NOT NULL)
GROUP BY
  (
    DATE_TRUNC(
      'month',
      ("ahoy_events"."time" :: timestamptz) AT TIME ZONE 'America/Chicago'
    )
  ) AT TIME ZONE 'America/Chicago';

The table itself looks like this:

CREATE TABLE public.ahoy_events (
    id bigint NOT NULL,
    visit_id integer,
    person_id integer,
    name integer NOT NULL,
    properties jsonb,
    "time" timestamp without time zone,
    deprecated_semantic_type character varying,
    semantic_type_id bigint,
    product_id bigint,
    merchant_id bigint
);
ALTER TABLE ONLY public.ahoy_events ADD CONSTRAINT ahoy_events_pkey PRIMARY KEY (id);
ALTER TABLE ONLY public.ahoy_events ADD CONSTRAINT fk_rails_33ad087eb5 FOREIGN KEY (product_id) REFERENCES public.products(id);
ALTER TABLE ONLY public.ahoy_events ADD CONSTRAINT fk_rails_de0839b608 FOREIGN KEY (semantic_type_id) REFERENCES public.semantic_types(id);

CREATE INDEX index_ahoy_events_on_merchant_id ON public.ahoy_events USING btree (merchant_id);
CREATE INDEX index_ahoy_events_on_merchant_id_and_time ON public.ahoy_events USING btree (merchant_id, "time");
CREATE INDEX index_ahoy_events_on_merchant_id_and_name_and_time ON public.ahoy_events USING btree (merchant_id, name, "time");

CREATE INDEX index_ahoy_events_on_person_id ON public.ahoy_events USING btree (person_id);
CREATE INDEX index_ahoy_events_on_person_id_and_name ON public.ahoy_events USING btree (person_id, name);

CREATE INDEX index_ahoy_events_on_product_id_and_name ON public.ahoy_events USING btree (product_id, name);

CREATE INDEX index_ahoy_events_on_semantic_type_id ON public.ahoy_events USING btree (semantic_type_id);

CREATE INDEX index_ahoy_events_on_url ON public.ahoy_events USING gin (((properties ->> 'url'::text)) public.gin_trgm_ops);

CREATE INDEX index_ahoy_events_on_visit_id ON public.ahoy_events USING btree (visit_id);
CREATE INDEX index_ahoy_events_on_visit_id_and_name ON public.ahoy_events USING btree (visit_id, name);

When I run EXPLAIN (ANALYZE, BUFFERS) on this query, this is what I get:

 HashAggregate  (cost=174126.69..174652.97 rows=105256 width=16) (actual time=46752.289..46752.814 rows=13 loops=1)
   Output: count(*), (timezone('America/Chicago'::text, date_trunc('month'::text, timezone('America/Chicago'::text, ("time")::timestamp with time zone))))
   Group Key: timezone('America/Chicago'::text, date_trunc('month'::text, timezone('America/Chicago'::text, (ahoy_events."time")::timestamp with time zone)))
   Buffers: shared hit=2729 read=86382
   I/O Timings: read=44671.925
   ->  Bitmap Heap Scan on public.ahoy_events  (cost=1365.14..174021.43 rows=105256 width=8) (actual time=80.374..46656.883 rows=98175 loops=1)
         Output: timezone('America/Chicago'::text, date_trunc('month'::text, timezone('America/Chicago'::text, ("time")::timestamp with time zone)))
         Recheck Cond: ((ahoy_events.merchant_id = 1923) AND (ahoy_events.name = 3) AND (ahoy_events."time" > '2019-01-15 21:31:54.794496'::timestamp without time zone) AND (ahoy_events."time" IS NOT NULL))
         Heap Blocks: exact=88624
         Buffers: shared hit=2729 read=86382
         I/O Timings: read=44671.925
         ->  Bitmap Index Scan on index_ahoy_events_on_merchant_id_and_name_and_time  (cost=0.00..1359.88 rows=105256 width=0) (actual time=62.504..62.504 rows=98175 loops=1)
               Index Cond: ((ahoy_events.merchant_id = 1923) AND (ahoy_events.name = 3) AND (ahoy_events."time" > '2019-01-15 21:31:54.794496'::timestamp without time zone) AND (ahoy_events."time" IS NOT NULL))
               Buffers: shared hit=1 read=486
               I/O Timings: read=39.374
 Planning time: 0.217 ms
 Execution time: 46755.851 ms

Now, if I run it again immediately afterwards, it completes in under 300 ms, with shared-buffer hits being the only difference. It seems that I/O is the bottleneck of the bitmap heap scan here, but I'm a bit lost as to what I can do about it.
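
One thing I am wondering about (just a guess on my part): the query only touches merchant_id, name, and time, all of which are in index_ahoy_events_on_merchant_id_and_name_and_time, so in principle an index-only scan could skip the ~88,000 random heap reads entirely. That requires a reasonably fresh visibility map, which vacuum maintains:

-- Refresh table statistics and the visibility map so the planner
-- can consider an index-only scan over the covering index
VACUUM (VERBOSE, ANALYZE) ahoy_events;

If the planner still chooses the bitmap heap scan after that, I assume the remaining options are reducing the random I/O itself: faster storage, or enough RAM/shared_buffers that this working set stays cached.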

Any ideas?

EDIT: Some things worth mentioning:

  • Across the entire table there are (currently) about 5,000 distinct merchant IDs
  • the name field contains a required value between 0 and 7
  • the America/Chicago time zone is added programmatically and varies from merchant to merchant

Optimization: what is the best way to host a site with many images without it being too slow?

I have a friend who wants me to help with their WordPress website, which has about 150 images in its gallery. Their site is quite slow even after optimizing the images with the ImageOptim application. They are using SiteGround as their web hosting provider, but only the StartUp plan, because it is the most affordable. I know the site loads slowly because of the images, but I am not sure of the best approach to speed it up without paying for a more expensive hosting plan.

I'm thinking that maybe the best way to do this would be to just put their images in Google Images and use that as the gallery, or maybe use an Instagram feed plugin like Smash Balloon.

What would you all suggest?

Thank you