SEO: find large file size images on a website

I often use https://webpagetest.org/ to measure page speed, which also handily reports on un-optimised images.

The downside to this is that I have to create an individual test for every page URL.

Is there a tool which would crawl a website and report which images need optimisation?

Help appreciated.
Steve
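P.S. To illustrate, here is roughly the kind of per-page report I have in mind, as a rough Python sketch (requests and BeautifulSoup are my assumptions here, not a specific tool; a real crawler would loop this over every URL it discovers):

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def report_large_images(page_url, threshold_bytes=200_000):
    """Print images on one page whose Content-Length exceeds a threshold."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img", src=True):
        img_url = urljoin(page_url, img["src"])
        # HEAD request: get the file size without downloading the image
        head = requests.head(img_url, allow_redirects=True, timeout=10)
        size = int(head.headers.get("Content-Length", 0))
        if size > threshold_bytes:
            print(f"{size:>10,} bytes  {img_url}")

report_large_images("https://example.com/")  # placeholder URL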

nt.number theory – Can factorization of very large numbers be aided by associating them with a series (described below) of quadratic polynomials?

My name is J. Calvin Smith. I graduated in 1979 with a Bachelor of Arts in Mathematics from Georgia College in Milledgeville, Georgia. My Federal career (1979-2012) in the US Department of Defense led me to learn, explore, and take courses in cryptologic mathematics and number theory for cryptology, which in turn led me to the problem (one I assume is still difficult) of factoring very large numbers. I came up with a series of quadratic polynomials associated with any number one is trying to factor (an example of the technique follows), and I want to find out whether this is a technique with promise, one that would help make factorization algorithmic and fast, reducing the order of magnitude of the problem.

It is most easily described by way of example. Let p = 1009 and q = 2003.

n = pq = 2021027.

In this situation, we can effortlessly factorize n: we know what p and q are. What I want to do here is take what we know and use it as a means toward a means: a way to find a way, that way being a method of factoring numbers of arbitrary size straightforwardly.

The largest integer less than the square root of n is 1421. Let us set a new variable m to 1421.

1009 = m – 412.
2003 = m + 582.

n = m^2 + 170*m – 239784.

The discriminant in the quadratic formula for this polynomial would be B^2 - 4AC = 28900 + 959136 = 988036, the square of 994. Thus the quadratic formula will give us integer roots. This follows obviously from how we constructed the quadratic: as the product of two known factors.

But since m^2 = 2019241, we also have as a polynomial for n:

n = m^2 + 1786, which cannot be factored into integer roots by the quadratic formula (its discriminant, -4*1786 = -7144, is negative).

But 1786 = m + 365 = 2m – 1056 = 3m – 2477 and so on. Adding each of these expressions to m^2 produces a series of quadratic polynomials, almost all of which cannot be factored into integer roots.

Let us now look at m^2 + 1786, along with the next few polynomials in the series thus constructed, and see whether we can determine or calculate ahead of time that we would hit the jackpot, so to speak, at the polynomial with the 170*m term (i.e., the 171st quadratic in the series, counting from B = 0).

B     C        B^2 - 4*C
0     1786     -7144                            (negative: no real square root)
1     365      -1459                            (negative: no real square root)
2     -1056    4228 = 65^2 + 3 = 66^2 - 128     (not a perfect square)
3     -2477    9917 = 99^2 + 116 = 100^2 - 83   (not a perfect square)

In general here, the discriminant is B^2 + 5684*B - 7144, which is (B + 2842)^2 - 8084108 after completing the square. What I have not yet figured out, though I suspect it might be easy, is for which values of B the discriminant becomes a perfect square, causing the quadratic formula to produce integer roots and leading us to the answer we want: the factorization. Further, will this approach scale nicely? I am hoping the discovery of the perfect-square values of the discriminant's quadratic, regardless of the n = pq chosen (with p and q still unknown), can be done algorithmically and easily.
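For concreteness, here is a brute-force Python sketch of the search just described (my own illustration of the idea, not a claim that it scales): for each B it forms C so that m^2 + B*m + C still equals n, tests whether the discriminant B^2 - 4C is a perfect square, and if so reads the factors off the integer roots.

from math import isqrt

def factor_via_series(n, max_b=10**6):
    # Walk the series of quadratics m^2 + B*m + C, with C chosen so the
    # polynomial always equals n at m; stop when the discriminant
    # B^2 - 4C is a perfect square, i.e. the quadratic has integer roots.
    m = isqrt(n)              # largest integer not exceeding sqrt(n)
    r = n - m * m             # n = m^2 + r
    for B in range(max_b):
        C = r - B * m         # keeps m^2 + B*m + C equal to n
        disc = B * B - 4 * C
        if disc >= 0:
            s = isqrt(disc)
            if s * s == disc:            # perfect square: integer roots
                p = m - (s - B) // 2     # the two roots give the factors
                q = m + (s + B) // 2
                if 1 < p < n:            # skip the trivial 1 * n split
                    return p, q
    return None

print(factor_via_series(2021027))  # finds (1009, 2003), first hit at B = 170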

memory – syspolicyd taking large amounts of CPU, inactive RAM

Recently I noticed my CPU usage and temperature would spike about every 10 minutes, and found the culprit to be syspolicyd. It also marks about 50% of my RAM as inactive.


I updated the operating system and any other software I could think of, tried quitting the process in Activity Monitor, and restarted the computer a number of times; nothing seems to work. Clearing the inactive RAM with FreeMemory works for a few minutes (as you can see in the graph), but then it starts up again. This is really throttling my ability to use Logic Pro and I don't know what to do. Any thoughts?

design – Java: Splitting a large unit test class

The project (Java/Spring) I currently work on has a rather large unit-test class for one of its services: more than 1000 lines, several @Nested classes (one per major piece of functionality), some tests outside any @Nested class, and so on.

Finding this file difficult to read and to manage (after a modification in the service, you have to spend time finding the correct tests to fix or update), I proposed splitting the class into one test file per @Nested class, each linked to a "principal" test file that would manage the mocks and the tests that don't belong to any @Nested class. All the test files would be placed in a dedicated package.

I am wondering: would this be a good thing to do, or is it something to avoid?

I hope I was clear enough.
Have a nice day

How does loading a large file in memory affect thread latency in multithreaded applications?

Context: I'm developing multi-threaded software. One thread is particularly time-sensitive. Although it usually runs in real time, there is sporadically a significant latency. I've been able to tweak my software to reduce it considerably, but I'm still trying to improve. I know that some of it might be due to the OS's background processes (I'm running Windows 10). I've also yet to work on reducing the amount of shared memory, but that couldn't explain something like a 10 ms latency; that's more a matter of overall efficiency.

Question: My software has another thread that has to load audio files now and then (from an HDD), and I was wondering how much of an impact on latency this can have. Let's say it takes 50 ms to load a file. Does that mean other threads will have to wait 50 ms to access the RAM bus? How much does this depend on the programming language, the OS, or the hardware architecture?

I know it can't conceivably block everything else, right? Otherwise the computer would freeze while waiting for the operation to finish. But how granular is the operation at a low level? Does the CPU switch between requests after each integer it receives from the controller, or after chunks of a particular size, or does it have a way of loading the whole file in one contiguous sequence without blocking other operations?
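For what it's worth, here is a crude Python sketch of how one might measure the effect directly (the filename is a placeholder): one thread runs a ~1 ms tick loop and records its worst gap while another thread streams a large file from disk.

import threading, time

def timing_loop(stop):
    """Time-sensitive loop: record the worst gap between ~1 ms ticks."""
    worst = 0.0
    prev = time.perf_counter()
    while not stop.is_set():
        time.sleep(0.001)
        now = time.perf_counter()
        worst = max(worst, now - prev)
        prev = now
    print(f"worst tick-to-tick gap: {worst * 1000:.2f} ms")

def load_file(path):
    """Background loader: stream the file in 1 MiB chunks rather than
    one giant read, so no single I/O call holds things up for long."""
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass

stop = threading.Event()
t = threading.Thread(target=timing_loop, args=(stop,))
t.start()
load_file("big_audio_file.wav")  # hypothetical path for illustration
stop.set()
t.join()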

worksheet function – Need to fill large data field

I have downloaded some data in which each category needs sorting individually into date order; however, the date only appears on a single title line per category, and it is in text form. The only way I can see of doing it is to add the date to each line. I know I can copy and paste, but there are about 4k lines, so I am looking for something that might speed the process up a bit.
I am fine with changing the text to a date, and with separately deleting whatever currently occupies the field if required.
I have attached an example; any help would be appreciated.
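To show what I mean, here is a minimal sketch of the fill-down in Python/pandas, assuming the dates live in a column I'll call "Date" (the filenames and column name are placeholders for my actual file):

import pandas as pd

df = pd.read_excel("downloaded_data.xlsx")  # placeholder filename
# Title rows hold the text date; detail rows hold something else,
# which errors="coerce" turns into NaT so it can be overwritten.
df["Date"] = pd.to_datetime(df["Date"], errors="coerce")
df["Date"] = df["Date"].ffill()             # copy each date down its category
df.to_excel("filled_data.xlsx", index=False)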

Is there a way to numerically solve in Python a large number of uncoupled 1st-order differential equations?

I am looking for a way to numerically solve, in Python, a large number of uncoupled 1st-order differential equations in a time-efficient manner. Is there a method you would recommend?
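One common approach is to stack the uncoupled equations into a single vector-valued system and integrate them all in one SciPy call; below is a minimal sketch in which the decay equation y_i' = -k_i * y_i merely stands in for the actual right-hand sides:

import numpy as np
from scipy.integrate import solve_ivp

N = 10_000                               # number of independent equations
k = np.random.uniform(0.1, 2.0, size=N)  # per-equation parameters
y0 = np.ones(N)

def rhs(t, y):
    # Elementwise operations only: equation i never sees equation j,
    # so the system stays uncoupled but integrates in one solver call.
    return -k * y

sol = solve_ivp(rhs, (0.0, 5.0), y0, method="RK45",
                t_eval=np.linspace(0.0, 5.0, 50))
print(sol.y.shape)                       # (N, 50): one row per equation

Note that the solver's adaptive step is shared across all stacked equations; if some equations are much stiffer than others, batching them into groups (or looping per equation) is the alternative trade-off.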

forms – Can a 2-column grid of info be acceptable if it mitigates an awkwardly large amount of white space?


I'm in the middle of a pseudo-overhaul of my company's platform, with an emphasis on "object-level" pages (i.e. the page for an individual task or an individual appointment).

A problem I'm running into is that the info on many of these pages is laid out in a two-column format (see pic), which I know any competent UX designer will tell you is bad for scanning. But while combining both columns into one single left-aligned column might improve scannability, it would also leave a gaping void of white space on the right portion of the page. I can't really articulate a UX principle this violates beyond just looking ridiculous and stark, but it still seems like an issue worth surfacing.

Any thoughts on this? I want to follow best practices but I also would like to preserve a sense of visual balance and appeal.

Thanks!

microservices – Service integration with large amounts of data

I am trying to assess the viability of microservices/DDD for an application I am writing, in which a particular context/service needs to respond to an action completing in another context. Whilst I have previously handled this via integration events published to a message queue, I haven't had to deal with events that could contain large amounts of data.

As a generic example, let's say we have an Orders context and an Invoicing context. When an order is placed, an invoice needs to be generated and sent out.

With those bits of information I would raise an OrderPlaced event containing the order information, for example:

public class OrderPlacedEvent
{
    // The full order snapshot travels inside the event payload.
    public Guid Id { get; }
    public List<OrderItem> Items { get; }   // could be 1000+ items
    public DateTime PlacedOn { get; }
}

from the Orders context; the Invoicing context would consume this event to generate the required invoice. This seems fairly standard, but all the examples I have found are fairly small and don't address what happens if there are 1000+ items in the order, which leads me to believe that integration events may only be intended for small pieces of information.

The 'easiest' way would be to just publish an order ID and have the Invoicing context query the Orders service for the rest of the information, but this would add coupling between the two services, which the approach is trying to remove.

Is my assumption that event data should be minimal correct? If it is, how would I correctly handle (and is it even possible to handle) a scenario where there are large pieces of data to which another context/service needs to respond?
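One option I have been considering is the "claim check" pattern: persist the payload once in a store both contexts can reach, and publish only a reference. A rough Python-flavoured sketch (the bus and blob_store interfaces are purely illustrative, not any particular framework):

import json
from datetime import datetime, timezone

def publish_order_placed(bus, blob_store, order):
    # Persist the full order payload once in a store both services
    # can reach, then publish a slim event carrying only a reference.
    blob_key = f"orders/{order['id']}"
    blob_store.put(blob_key, json.dumps(order))
    bus.publish("OrderPlaced", {
        "orderId": order["id"],
        "itemCount": len(order["items"]),
        "placedOn": datetime.now(timezone.utc).isoformat(),
        "payloadRef": blob_key,   # consumer fetches the items lazily
    })

This moves the coupling from a synchronous query against the Orders service to a shared blob store, which may or may not be an acceptable trade-off.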

csom – Upload large file from SharePoint to external API

I am developing an API project using MVC .NET to copy a SharePoint file to an external system (one that is not SharePoint). I am getting the file's stream from SharePoint using the code below:

ResourcePath filePath = ResourcePath.FromDecodedUrl(docURLDecoded);
SP.File newFile = context.Web.GetFileByServerRelativePath(filePath);

// Queue the request for the file's binary stream; nothing is fetched
// until ExecuteQuery runs, after which data.Value holds the stream.
ClientResult<Stream> data = newFile.OpenBinaryStream();
context.Load(newFile);
context.ExecuteQuery();

The file size we need to support is up to 1 GB. I would like to know: does the above code load the entire 1 GB of data into memory? When I run this code in a console application, I see the memory usage (in Task Manager) go above 1 GB. This matters because we will only have 2-3 GB of memory available for the web API.

Or is there a better way than this?

Thank you!