algorithms – Maximum Subarray Problem – analyzing best-case, worst-case, and average-case time complexity (Big O)

New to the board; if this is the wrong section I apologize and will delete it. It would be helpful to be pointed to the correct exchange to guide me through this process of learning.

If you are given an array such as A(1...n) of numeric values (which can be positive, zero, or negative), how can you determine the subarray A(i...j) (1 ≤ i ≤ j ≤ n) whose sum of elements is maximum over all subarrays (subvectors)? Regarding the brute-force algorithm below, how do you go about analyzing its best-case, worst-case, and average-case time complexity, both as a polynomial in n and in asymptotic Θ-notation? How would you even show the steps without building out the algorithm?

Thanks in advance.

// PSEUDOCODE
// BRUTE-FORCE-FIND-MAXIMUM-SUBARRAY(A)
n = A.length
max-sum = -∞
for l = 1 to n                  // l = start index of the candidate subarray
    sum = 0
    for h = l to n              // h = end index; sum holds A(l) + ... + A(h)
        sum = sum + A(h)
        if sum > max-sum        // keep the best (low, high, max-sum) seen so far
            max-sum = sum
            low = l
            high = h
return (low, high, max-sum)

Note: I am new to the forum and not sure if this is the correct exchange, but I am referring to problems from https://walkccc.me/CLRS/Chap04/4.1/.
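For reference, here is the same brute force translated into runnable (0-indexed) Python, with a counter that makes the step-counting explicit: the innermost statement executes n + (n−1) + ... + 1 = n(n+1)/2 times regardless of the array's contents, which is the kind of polynomial-in-n count the Θ-analysis asks for. (This is my own sketch, not from CLRS or the linked solutions.)

# Python sketch of BRUTE-FORCE-FIND-MAXIMUM-SUBARRAY, 0-indexed,
# with a counter for how often the inner loop body runs.
def brute_force_max_subarray(A):
    max_sum = float("-inf")
    low = high = 0
    inner_steps = 0
    for l in range(len(A)):                # start index of the candidate subarray
        s = 0
        for h in range(l, len(A)):         # end index; s = A[l] + ... + A[h]
            inner_steps += 1
            s += A[h]
            if s > max_sum:
                max_sum, low, high = s, l, h
    return low, high, max_sum, inner_steps

n = 8
_, _, _, steps = brute_force_max_subarray([1, -4, 3, -1, 2, -6, 5, -2])
print(steps, n * (n + 1) // 2)             # both 36: the count depends only on n, not on the data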

algorithms – Given a unit vector $x\in\mathbb R^d$, what is the worst possible within-cluster sum of squares for 2-means clustering?

This is a question I originally posted to math.stackexchange.com but it didn’t attract any answers, and I was wondering if someone here can help.


Consider a unit vector $x\in\mathbb R^d$ ($\|x\|_2=1$), and a $k$-means clustering of it for $k=2$.

How big can the within-cluster sum of squares get?

Formally:

How to upper bound
$$
\max_{\|x\| = 1}\min_{\mu_1,\mu_2\in\mathbb R} \left(\sum_{i=1}^d\min\left\{\left(x_i-\mu_1\right)^2,\left(x_i-\mu_2\right)^2\right\}\right)\quad ?
$$


A trivial bound is $1$, but I suspect a much tighter bound exists.


It’s easy to show that this quantity is at least $1/4$, by considering an $x$ in which a third of the coordinates are proportional to $1$, a third to $-1$, and a third are zeros (e.g., $x=(1/\sqrt 2, -1/\sqrt 2, 0)$ if $d=3$).
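For concreteness, here is a small numerical check of that example (my own sketch, not from the original post). It relies on the standard fact that for one-dimensional data an optimal 2-means partition is contiguous once the coordinates are sorted, so it suffices to try every split point.

# Check the d = 3 example numerically: the optimal 2-means cost should be 1/4.
import numpy as np

def two_means_cost(x):
    # Optimal within-cluster sum of squares for k = 2 on 1-D data x.
    xs = np.sort(np.asarray(x, dtype=float))
    best = np.inf
    for s in range(1, len(xs)):                  # split into xs[:s] and xs[s:]
        left, right = xs[:s], xs[s:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        best = min(best, cost)
    return best

x = np.array([1 / np.sqrt(2), -1 / np.sqrt(2), 0.0])   # unit vector from the example
print(np.linalg.norm(x), two_means_cost(x))            # ~1.0 and 0.25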

[SkyNetHosting] Worst Experience 1/10 (Never Recommended)

I’ve been using SkyNet Hosting for more than a month now. I bought their Corporate VIP Reseller Server Account. The loading speed is pretty bad, a… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1840951&goto=newpost

real analysis – Is the harmonic series the worst?

It is well-known that the harmonic series is not summable. In some sense this means that it takes a lot of rather large values.

We define the operator $F_{\varepsilon}: \ell^{\infty}(\mathbb N) \rightarrow (0,\infty)$ by $$F_{\varepsilon}(x) = \sum_{i=1}^{\infty} 2^{-\varepsilon \vert x_i \vert^{-1}} \text{ for } \varepsilon>0.$$

Now consider a positive summable sequence $x$ and the harmonic sequence $(1/n)_n$. Intuitively, the slow decay of the harmonic sequence should imply that it converges to zero more slowly than anything summable (for most of its terms).

Therefore, I ask: Is it true that for any positive summable sequence $x$

$$\limsup_{\varepsilon \downarrow 0} \frac{F_{\varepsilon}(x)}{F_{\varepsilon}((1/n))} \le 1?$$
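As a rough numeric illustration only (my own sketch, truncating the infinite sums and taking $x_i = 1/i^2$ as a sample positive summable sequence), the ratio does stay well below $1$ for that particular $x$ as $\varepsilon$ shrinks, though of course this says nothing about the limsup in general.

# F_eps(x) = sum_i 2^(-eps / |x_i|), truncated; purely illustrative.
def F(seq, eps, terms=200_000):
    return sum(2.0 ** (-eps / abs(seq(i))) for i in range(1, terms + 1))

harmonic = lambda i: 1.0 / i
summable = lambda i: 1.0 / i**2        # a sample positive summable sequence

for eps in (0.1, 0.01, 0.001):
    print(eps, F(summable, eps) / F(harmonic, eps))   # ratio stays well below 1 here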

Which algorithm has the worst time complexity on average?

a] Shell Sort
b] Insertion Sort
c] Selection Sort
d] Bubble Sort

algorithms – Ω(f(x)) and worst case analysis

I’m currently reading The Algorithm Design Manual by Steven S. Skiena as my first book on algorithms.

Something in the part on asymptotics is kind of confusing to me.

Proving the Theta

The analysis above gives a quadratic-time upper bound on the running time of
this simple pattern matching algorithm. To prove the theta, we must show an
example where it actually does take Ω(mn) time.

Consider what happens when the text t = “aaaa . . . aaaa” is a string of n
a’s, and the pattern p = “aaaa . . . aaab” is a string of m − 1 a’s followed by a b.
Wherever the pattern is positioned on the text, the while loop will successfully
match the first m − 1 characters before failing on the last one. There are
n − m + 1 possible positions where p can sit on t without overhanging the end,
so the running time is:

(n − m + 1)(m) = mn − m² + m = Ω(mn)

This example is clearly meant to exhibit the worst possible running time of the algorithm; however, Ω(mn) has been used instead of O(mn).

Isn’t Ω a lower bound for the algorithm, meaning that for big enough n the algorithm cannot perform better than this?

If so, why is Ω used to show the worst-case performance? Shouldn’t Big O be used instead?

Any help would be much appreciated.
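For reference, here is a small sketch of the quadratic matcher the passage analyzes (my own code, not Skiena’s), instrumented with a comparison counter. On the adversarial input from the book it performs exactly (n − m + 1)·m character comparisons, which is the count the Ω(mn) statement refers to.

# Naive pattern matching with a counter for character comparisons.
def find_match(p, t):
    n, m = len(t), len(p)
    comparisons = 0
    for i in range(n - m + 1):                 # every position where p can sit on t
        j = 0
        while j < m and t[i + j] == p[j]:
            comparisons += 1                   # successful character comparison
            j += 1
        if j == m:
            return i, comparisons              # full match found
        comparisons += 1                       # count the failing comparison too
    return -1, comparisons

# Worst-case input from the passage: n a's vs. (m - 1) a's followed by a b.
n, m = 1000, 10
_, count = find_match("a" * (m - 1) + "b", "a" * n)
print(count)                                   # (n - m + 1) * m = 9910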

heaps – How to derive the worst case time complexity of Heapify algorithm?

I would like to know how to derive the time complexity of the Heapify algorithm for the heap data structure.

I am asking this question in the light of the book "Fundamentals of Computer Algorithms" by Ellis Horowitz. I am adding some screenshots of the algorithm as well as the derivation given in the book.

Algorithm:

Algorithm for Heapify()

Algorithm for Adjust()
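(The screenshots don't seem to render here, so for context here is a sketch of what a standard 1-indexed Heapify()/Adjust() pair looks like in Python. It is my assumption about what the book's pseudocode does, not a transcription of it.)

# Adjust(): sift a[i] down until the subtree rooted at i is a max-heap.
# Uses a[1..n]; a[0] is a placeholder to keep the book's 1-indexing.
def adjust(a, i, n):
    item = a[i]
    j = 2 * i                           # left child of the current hole
    while j <= n:
        if j < n and a[j + 1] > a[j]:
            j += 1                      # pick the larger child
        if item >= a[j]:
            break
        a[j // 2] = a[j]                # move the child up into the hole
        j *= 2
    a[j // 2] = item                    # cost is O(height of the subtree)

# Heapify(): build a max-heap bottom-up by adjusting every internal node.
def heapify(a, n):
    for i in range(n // 2, 0, -1):
        adjust(a, i, n)

data = [None, 3, 1, 4, 1, 5, 9, 2, 6]   # a[0] is a dummy slot
heapify(data, len(data) - 1)
print(data[1:])                          # data[1] is now the maximum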

Derivation for worst case complexity:

[screenshot of the book's derivation]

I understood the first and last parts of this calculation, but I cannot figure out how 2^(i-1)*(k-i) changed into i*2^(k-i-1).
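If it helps anyone answering, my guess is that both expressions appear under a sum over $i$ from $1$ to $k-1$, in which case the change is just the reindexing $j = k - i$:

$$\sum_{i=1}^{k-1} 2^{i-1}(k-i) \;=\; \sum_{j=1}^{k-1} 2^{(k-j)-1}\, j \;=\; \sum_{i=1}^{k-1} i\, 2^{k-i-1},$$

i.e. the individual terms are not equal, only the sums as a whole are. Is that the right reading of the book's step?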

All the derivations I can find on the internet take a different approach by considering the height of the tree differently. I know that approach also leads to the same answer, but I would like to know about this approach.

You might need the following information:

2^k − 1 = n, or approximately 2^k = n, where k is the number of levels, counting from the root node whose level is 1 (not 0), and n is the number of nodes.

Also, the worst-case time complexity of the Adjust() function is proportional to the height of the sub-tree it is called on, that is, O(log n), where n is the total number of elements in that sub-tree.

security – When a BIP44 XPUB and one of its descendant keys get leaked, what’s the worst that can happen?

I understand that BIP44 has one edge-case vulnerability: if a hacker gets his hands on an xpub and a private key from one of its (non-hardened) descendants, he can compute the xprv paired with the original xpub and therefore get access to every single private key of its descendants.

I’m trying to implement a system where Xpub can be shared without risking too much security and wanted to confirm my understanding. Here’s the situation:

  1. Let’s say Alice has a wallet with multiple accounts, for example m/44'/0'/0', m/44'/0'/1', m/44'/0'/2', and so on.
  2. Alice shares just one Xpub at path m/44'/0'/0' with Bob.
  3. Bob can derive the descendant public key tree with paths such as m/44'/0'/0'/0/0, m/44'/0'/0'/0/1, m/44'/0'/0'/0/2, and so on.
  4. Bob ONLY has access to the derived PUBLIC KEYS at above paths.
  5. For some reason, Alice’s PRIVATE KEY at path m/44'/0'/0'/0/2 is leaked.

In this case, is the worst-case scenario that Alice gets compromised up to m/44'/0'/0' only, and that her other key trees m/44'/0'/1', m/44'/0'/2', and so on are safe? (Meaning her private keys like m/44'/0'/1'/0/0 and m/44'/0'/2'/0/2 are not affected by the leak.)
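To double check my understanding, here is a rough sketch of the one-level recovery arithmetic as I understand standard BIP32 non-hardened derivation (the helper name and variable names are mine, not from any particular library). Applying it twice takes the leaked key at m/44'/0'/0'/0/2 up to the private key behind the shared account xpub m/44'/0'/0'; the hardened siblings m/44'/0'/1', m/44'/0'/2', and so on should stay out of reach because the next step up from the account level is hardened.

# Why xpub + a non-hardened descendant private key is dangerous (BIP32 math).
import hmac, hashlib

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def recover_parent_privkey(parent_pubkey33, parent_chain_code, index, child_privkey):
    # The parent's compressed public key and chain code are readable from the xpub.
    # For a NON-hardened child: k_child = (IL + k_parent) mod N, where
    # IL = left 32 bytes of HMAC-SHA512(chain_code, serP(K_parent) || ser32(index)),
    # so the parent key is recovered as k_parent = (k_child - IL) mod N.
    assert index < 0x80000000, "only works for non-hardened children"
    data = parent_pubkey33 + index.to_bytes(4, "big")
    I = hmac.new(parent_chain_code, data, hashlib.sha512).digest()
    IL = int.from_bytes(I[:32], "big")
    return (child_privkey - IL) % N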

usability study – Is a repetitive three-second response time the absolute worst?

My very first IT manager back in the 90s once stated that the absolute worst repetitive response time an application can have is three seconds. His argument was that this is long enough to be a significant annoyance, and too short to get a (cognitive) break. So if you want to design a system for maximum frustration, make every action have an average response time of three seconds (and allow for some variation too, just to make it unpredictable as well).

This was my manager’s anecdotal input, but I have always thought of it as valid, and it seems to make sense given Jakob Nielsen’s thoughts on the matter.

Is there any research to back up (or invalidate) the claim?

OVH.com is a scam, stay away – the worst review for them

So OVH sent me a DMCA notice for no reason, from someone claiming I used his pics. I told them I had sorted the issue and they just suspended me; now I can’t… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1823059&goto=newpost