Find the recurrence relation for the following algorithm

The question asks me to find the recurrence relation of the following algorithm and to solve it using the characteristic equation.

(image of the algorithm's pseudocode not included)

The recurrence equation that I found is $T(n) = 2 \cdot T(2n/3) + T(n/3) + 1$ with base case $T(2) = 5$.

The base case is $T(2) = 5$ because of the 2 comparisons in the if statement plus the 3 statements in the swap() function.
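As a sanity check on that guess, here is a tiny Python sketch that just evaluates the recurrence numerically. The integer floors and treating every $n \leq 2$ as the base case are my own assumptions, since the question only pins down $T(2)$:

from functools import lru_cache

# Numeric sanity check for T(n) = 2*T(2n/3) + T(n/3) + 1, T(2) = 5.
# Assumption: arguments are floored to integers and every n <= 2 is treated
# as the base case, since the question does not say how other sizes behave.
@lru_cache(maxsize=None)
def T(n):
    if n <= 2:
        return 5
    return 2 * T((2 * n) // 3) + T(n // 3) + 1

for n in (2, 9, 27, 81, 243, 729):
    print(n, T(n))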

I’m not sure whether what I’ve tried is correct or not.
All help is appreciated!

sharepoint enterprise – InvalidOperationException: Could not get algorithm from X509AsymmetricSecurityKey to make Client Context for SP 2016 in VS 2019

I’m trying to create a client context to an SP 2016 site with the help of a certificate, using the following piece of code:

// Build a high-trust (server-to-server) context via the OfficeDevPnP AuthenticationManager.
OfficeDevPnP.Core.AuthenticationManager othManager = new OfficeDevPnP.Core.AuthenticationManager();
// Load the signing certificate from disk.
System.Security.Cryptography.X509Certificates.X509Certificate2 cert = new System.Security.Cryptography.X509Certificates.X509Certificate2({certificate path}, {password to access Certificate});
// Create an app-only client context for the target site.
ClientContext oContext = othManager.GetHighTrustCertificateAppOnlyAuthenticatedContext({siteurl}, {clientID}, cert, {certificate issuer id});
// Request the web and read its title.
Web objWeb = oContext.Web;
oContext.Load(objWeb);
oContext.ExecuteQuery();
string webTitle = objWeb.Title;

This runs fine when built from Visual Studio 2015, but my solutions now live in VS 2019, and there it throws the following exception:

InvalidOperationException: Could not get algorithm from X509AsymmetricSecurityKey

I even tried the following in the App.config file of my application, but the issue persists:

<system.web>
<compilation debug="true" targetFramework="4.7.2"/>
<httpRuntime targetFramework="4.6.1"/>
</system.web>

Any help would be much appreciated!

java – help understanding the recurrence relation of an algorithm

I have the following code for which I have to figure out the recurrence relation, but I am having a bit of trouble understanding what the algorithm does exactly.

Interesting-Product(A(1..n))
    if n = 1 then return
    for i := 1 to n/3 do
        A(i) = A(i) * A(i + n/3)
    Interesting-Product(A(1..n/3))
    for j := n/3 + 1 to 2n/3 do
        A(j) = A(j) * A(j + n/3)
    Interesting-Product(A(1..n/3))

I’m taking an algorithms class and having trouble understanding the code.
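In case it helps to see it run, here is a direct Python transcription of the pseudocode (my own sketch, with 0-based indices; it assumes $n$ is a power of 3 so that $n/3$ is always exact):

# Direct transcription of Interesting-Product into Python (0-based indices).
# Assumption: n is a power of 3, so n // 3 is always exact.
def interesting_product(A, n):
    if n == 1:
        return
    third = n // 3
    # First pass: multiply each of the first n/3 entries by its partner
    # one third further along the array.
    for i in range(third):
        A[i] = A[i] * A[i + third]
    interesting_product(A, third)          # first recursive call on the first third
    # Second pass: the same pairing, shifted to the middle third.
    for j in range(third, 2 * third):
        A[j] = A[j] * A[j + third]
    interesting_product(A, third)          # second recursive call on the first third

A = list(range(1, 10))   # n = 9
interesting_product(A, len(A))
print(A)

Counting the work this way, each call does about $2n/3$ multiplications across the two loops and then makes two recursive calls on the first third of the array, which suggests a recurrence of the shape $T(n) = 2T(n/3) + \Theta(n)$; that reading is mine, not something stated in the question.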

Las Vegas algorithm for finding 00000 in bit string

Problem 1: Consider the following problem: given a binary string $w=a_1a_2\cdots a_n \in \{0,1\}^*$, decide whether $w$ contains 00000 as a substring (i.e., whether $w$ contains five consecutive 0's). There is an obvious $O(n)$-time algorithm. You will investigate the hidden constant factor, concerning the number of bits that need to be examined (i.e., the number of $a_i$'s that need to be evaluated) to solve this problem.

  1. (10/100 points) First argue that every deterministic algorithm solving this problem needs to evaluate $\Omega(n)$ bits in the worst case.
  2. (30/100 points) Prove that for a sufficiently large constant $c$, for any string $a_1 \cdots a_c$ of length $c$ that does not contain 00000, there must exist two indices $i,j \in \{1,\ldots,c\}$ with $a_i=a_j=1$ and $2 \leq |i-j| \leq 5$.
  3. (60/100 points) Design a Las Vegas randomized algorithm solving this problem (with zero error probability) that evaluates an expected $\alpha n + o(n)$ number of bits for any input, for some constant $\alpha$ strictly smaller than $1$. (You do not need to optimize the constant $\alpha$.)

Above is a question some friends and I have looked at to prepare for a qualifying exam; it is itself from a past qualifying exam. Parts 1 and 2 are fine, but what we are having trouble with is part 3. I think the idea behind this problem is to imagine that evaluating some bit $a_i$ is really expensive, so we would like to get the right answer while minimizing how many of these bits we evaluate. That said, it is ambiguous to us whether we care about (1) the number of distinct bits we evaluate or (2) the total number of times we evaluate any bit. Clearly, if we were worried about the latter, we could just store the result of each bit evaluation and avoid doing an expensive evaluation again, but we are not sure. For now, I assume the latter case, and specifically that if we want the value of the same bit twice, we need to evaluate it each time and incur a cost of $2$ toward the quantity we are trying to minimize.


Now I have some idea for this problem, and to help explain it, let us consider a simple deterministic algorithm with the pseudocode written below. We loop from the start of the string to the end, checking every meaningful 5-bit window. If we find a $1$ bit as we scan a window, we know that window cannot be $00000$, but a window starting at the bit right after that $1$ might be. Thus, we update our starting point to the position right after that $1$ bit and continue from there.

On input $s$:

  1. set $i \leftarrow 1$
  2. while( $i \leq n-4$ ):
    • set $b \leftarrow \text{true}$
    • for( $j = 0$ to $4$ ):
      • if( $a_{i+j} = 1$ ):
        • $i \leftarrow i+j+1$
        • $b \leftarrow \text{false}$
        • break for loop.
    • if( $b = \text{true}$ ):
      • return TRUE
  3. return FALSE

My idea for a Las Vegas algorithm was to keep the same algorithm but modify it slightly so that the inner loop is performed in random order, so the pseudocode becomes:

On input $s$:

  1. set $i \leftarrow 1$
  2. while( $i \leq n-4$ ):
    • set $b \leftarrow \text{true}$
    • for( $j = 0$ to $4$ in random order):
      • if( $a_{i+j} = 1$ ):
        • $i \leftarrow i+j+1$
        • $b \leftarrow \text{false}$
        • break for loop.
    • if( $b = \text{true}$ ):
      • return TRUE
  3. return FALSE
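For concreteness, here is a minimal Python sketch of this randomized variant that also counts how many bit probes it makes (my own code, not part of the original question; replacing random.sample(range(5), 5) with range(5) gives back the deterministic version):

import random

def contains_00000(a):
    """Return (found, probes): found is True iff `a` has five consecutive 0s.

    Bits are read through probe() so we can count evaluations; probing the
    same position twice counts twice, matching the cost model assumed above.
    """
    n = len(a)
    probes = 0

    def probe(k):
        nonlocal probes
        probes += 1
        return a[k]

    i = 0                                      # 0-based start of the current 5-bit window
    while i <= n - 5:
        found_one = False
        for j in random.sample(range(5), 5):   # inspect the window in random order
            if probe(i + j) == 1:
                i = i + j + 1                  # restart right after the 1 we found
                found_one = True
                break
        if not found_one:
            return True, probes                # the window was all zeros
    return False, probes

bits = [1, 0, 0, 0, 0, 1] * 10                 # the 100001 100001 ... string from the example below
print(contains_00000(bits))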

The positive thing going for this algorithm is that if there exists a $1$ bit in the 5-bit window we are looking at in the inner loop, we will find it in at most $3$ loop iterations in expectation. However, if I define the bit string $s$ to be
$$s = 100001\;100001\;100001 \cdots 100001$$
then the algorithm should require looking at $6$ bits (potentially some of the same ones multiple times) in expectation to get past each $100001$ block. This implies that, on this input, we will evaluate about $n$ bits in expectation (the number of distinct bits seen may be less) before we answer whether $00000$ is contained in $s$. Thus, this algorithm does not seem sufficient.

Does anyone have any thoughts on how to approach this problem, or think we should actually be counting only the number of distinct bits we evaluate? If the latter, then I could potentially see other ways to tackle this problem.

blender – Open world style chunking algorithm help with chunk placement

Hey, I have this chunking algorithm I am writing and I can’t for the life of me figure out how to place the chunks so they are aligned with each other as if they were on a fixed grid. The issue is that I have a player object that never moves; instead, when it “moves”, the code moves all the chunks around the player. When a new chunk is moved underneath the player, the chunk-loading algorithm is called and places all the nearby chunks. However, I can’t figure out what calculation I need to work out where they should all go: they always pop up offset from the grid. How do I correct the offset so that they are aligned?

from bge import logic, events

co = logic.getCurrentController()
cameraObj = co.owner
scene = logic.getCurrentScene()
xNeighbors = (-1,0,1,-1,0,1,-1,0,1)
yNeighbors = (-1,-1,-1,0,0,0,1,1,1)
    
def keyHit(key_code): 
    
    status = logic.keyboard.events[key_code]
    
    return status == logic.KX_INPUT_ACTIVE
    
    
def loadChunks(curr):
    chunksToLoad = []
    chunkNeighborsX = []
    chunkNeighborsY = []
    chunksToRemove = []
    currX = curr % 4
    currY = int(curr/4)
    loopIndex = -1
    
    for x in xNeighbors:
        loopIndex += 1
        index = (currX + xNeighbors[loopIndex]) + 4 * (yNeighbors[loopIndex] + currY) + 1
        checkX = currX + xNeighbors[loopIndex]
        checkY = currY + yNeighbors[loopIndex]
        if (checkX > 4):
            continue
        if (checkX < 0):
            continue
        if (checkY > 4):
            continue
        if (checkY < 0):
            continue
        chunksToLoad.append(index)
        chunkNeighborsX.append(xNeighbors[loopIndex])
        chunkNeighborsY.append(yNeighbors[loopIndex])
        
    # check scene for the empty object associated with that blend file, if it
    # exists, then there is no need to load the blend file and it can be skipped 
    # when loading the blend files
    

    
    for object in scene.objects:
        libToFree = 0
        loopIndex = 0
        for x in chunksToLoad:
            if (object.name == 'chunk0' + str(x)):
                #blend file that needs to be opened has already been opened
                chunksToLoad.remove(x)
                chunkNeighborsX.pop(loopIndex)
                chunkNeighborsY.pop(loopIndex)
                libToFree = 1
            loopIndex += 1
        if (libToFree == 0):
            if ('chunk0' in object.name):
                chunksToRemove.append(object.name)
                
    loopIndex = 0
    print(chunksToLoad)
    print(chunkNeighborsX)
    print(chunkNeighborsY)
    for x in chunksToLoad:
        try:
            logic.LibLoad('//chunk0' + str(x) + '.blend','Scene')
        except:
            continue
        
        chunkContents = scene.objects['chunk0' + str(x)]
        print(cameraObj['x'])
        print(cameraObj['y'])
        xOffset = 16 * (int(cameraObj['x'] / 16) + 1)
        yOffset = 16 * (int(cameraObj['y'] / 16) + 1)
        chunkContents.localPosition[0] += (chunkNeighborsX[loopIndex] * 16 - 2 * cameraObj['x'])
        chunkContents.localPosition[1] += (chunkNeighborsY[loopIndex] * 16 - 2 * cameraObj['y'])
        loopIndex += 1
        
    for x in chunksToRemove:
        print(x)
        logic.LibFree(logic.expandPath('//' + x + '.blend'))
    
def main():
    
    chunkSize = 16
    chunkChange = 0
    
    if (cameraObj['initState'] == 0):
        loadChunks(0)
        cameraObj['initState'] = 1
    
    if (keyHit(events.AKEY)):
        cameraObj['y'] = cameraObj['y'] + 0.1
        for object in scene.objects:
            if (object.name != 'Camera'):
                object.localPosition[1] = object.localPosition[1] - 0.1
    
    if (keyHit(events.WKEY)):
        cameraObj['x'] = cameraObj['x'] + 0.1
        for object in scene.objects:
            if (object.name != 'Camera'):
                object.localPosition[0] = object.localPosition[0] - 0.1
    
    y = cameraObj['y']
    x = cameraObj['x']
    currChunk = cameraObj['currChunk']
    checkChunk = int(y / 16) * 4 + int(x / 16) + 1
    
    if (currChunk != checkChunk):
        cameraObj['currChunk'] = checkChunk
        loadChunks(checkChunk - 1)

        
main()
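As a standalone illustration (my own sketch, not taken from the question and not a drop-in fix for the scene above), grid alignment usually comes down to snapping positions to multiples of the chunk size rather than using the player's raw offset; chunk_origin below is a hypothetical helper:

import math

CHUNK_SIZE = 16

def chunk_origin(world_coord, chunk_size=CHUNK_SIZE):
    """Largest multiple of chunk_size that is <= world_coord (the origin of its grid cell)."""
    return math.floor(world_coord / chunk_size) * chunk_size

# e.g. applied to the accumulated offsets the script keeps in cameraObj['x'] / ['y']
print(chunk_origin(37.3))   # -> 32
print(chunk_origin(-3.0))   # -> -16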



linear algebra – Reduction to Basis algorithm as universal statement

In the book “Linear Algebra Done Right” (Sheldon Axler), the proof of the basis-reduction algorithm is given as follows:

(image of the proof from the book not included)

To practice mathematical writing, I tried to write the algorithm in a similar fashion using my own understanding in the following manner:

Let $B = \{ v_1, \ldots, v_n \}$ be an arbitrary spanning set of $V$. For every $v_i \in B$, where $1 \leq i \leq n$, we check whether $v_i$ is a linear combination of the other vectors in $B$: if it is, we delete $v_i$ from $B$; if not, we keep $v_i$ in $B$.

Note: my proposal of the algorithm can be written logically (not fully formal, mostly for intuition) as $\forall \text{ spanning set } B = \{ v_1, \ldots, v_n \} .\ \forall v_i \in B \ (1 \leq i \leq n) .\ P(v_i)$, where $P(v_i)$ denotes the statement "check whether $v_i$ is a linear combination of the other vectors in $B$; if yes, delete $v_i$ from $B$, otherwise keep $v_i$ in $B$".
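For intuition only, here is a small numpy sketch of one reading of the procedure above, under the extra assumption that $V = \mathbb{R}^m$ so that "is a linear combination of the others" can be tested with a rank computation (the helper names are my own):

import numpy as np

def reduce_to_basis(vectors):
    """Remove every vector that is a linear combination of the remaining ones.

    vectors: a list of 1-D numpy arrays spanning a subspace of R^m.
    Scans left to right; B shrinks as redundant vectors are deleted.
    """
    B = list(vectors)
    i = 0
    while i < len(B):
        others = B[:i] + B[i + 1:]
        rank_others = np.linalg.matrix_rank(np.column_stack(others)) if others else 0
        rank_all = np.linalg.matrix_rank(np.column_stack(others + [B[i]]))
        if rank_all == rank_others:
            del B[i]           # B[i] adds nothing to the span of the others
        else:
            i += 1             # B[i] is needed; keep it and move on
    return B

spanning = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
print(reduce_to_basis(spanning))   # two of the three vectors survive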

Is my way of writing the algorithm correct? Is it equivalent to the algorithm in the book?

Why doesn’t the assignment of processors contribute to runtime in a PRAM algorithm?

The following algorithm for computing the number of nodes in a linked list appears in this thesis on common algorithms in the PRAM model for parallel computation.

(image of the PRAM linked-list node-count pseudocode not included)

The author then argues that this is a $\log(n)$-time algorithm (where $n$ is the number of nodes in the list) because the span of nodes covered by the "far" pointer doubles on every iteration of the innermost loop. What I don't understand is why the author doesn't seem to factor in the time required for the first two lines of code after the begin statement, where a processor is assigned to each node in the list. Assuming the input to the algorithm is just a pointer to the head node of the linked list, wouldn't the assignment of processors require traversing each node in the linked list and thus take time linear in $n$? After all, the nodes of the linked list may be stored in highly non-contiguous locations in memory, identifiable only by following the chain of pointers from predecessor nodes in the list one at a time.
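For reference, here is a sequential Python simulation of the pointer-jumping idea described above (my own sketch, not the thesis's pseudocode): every node starts by counting just itself, and in each round it adds the count reachable through its "far" pointer and then doubles that pointer's reach.

# Sequential simulation of a common PRAM list-counting scheme (pointer jumping);
# each while-iteration stands for one synchronous parallel round.
def count_nodes_pointer_jumping(next_ptr):
    """next_ptr[i] is the index of node i's successor, or None at the tail."""
    far = dict(next_ptr)                     # the "far" pointer, initially the next pointer
    count = {i: 1 for i in next_ptr}         # each node initially counts just itself
    while any(f is not None for f in far.values()):      # O(log n) rounds
        old_count, old_far = dict(count), dict(far)      # snapshot = synchronous reads
        for i in far:                        # conceptually, all processors act at once
            if old_far[i] is not None:
                count[i] = old_count[i] + old_count[old_far[i]]
                far[i] = old_far[old_far[i]]
    return count

# A 5-node list 0 -> 1 -> 2 -> 3 -> 4
nxt = {0: 1, 1: 2, 2: 3, 3: 4, 4: None}
print(count_nodes_pointer_jumping(nxt)[0])   # 5: the nodes from the head onward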

python 3.x – Algorithm For Longest Palindrome

I made a brute-force algorithm to find the longest palindrome within a string. The palindrome is a substring of the input. Below is the code.


def generate_substrings(string):
    """
    avg time: O(len(string) * len(string))
    avg space: O(1) 
    
    amortized time: O(len(string) * len(string)) 
    amortized space: O(1) 
    """ 
    col = 0 
    for row in range(len(string)):
        col = row + 1 
        while col < len(string) + 1:
            yield string[row:col]
            col = col + 1 
            
def test_generate_substrings():
    assert list(generate_substrings('')) == []
    assert list(generate_substrings('a')) == ['a']
    assert list(generate_substrings('ab')) == ['a', 'ab', 'b']

def is_palindrome(string):
    """
    avg time: O(len(string)) 
    avg space: O(len(string)) 
    
    amortized time: O(len(string)) 
    amortized space: O(len(string)) 
    """
    return string[::-1] == string 

def test_is_palindrome():
    assert is_palindrome('') == True 
    assert is_palindrome('ab') == False 
    assert is_palindrome('bb') == True 
    assert is_palindrome('abc') == False 
    assert is_palindrome('ddd') == True 
    
def find_palindrome(string):
    """
    avg time: O(len(string) * len(string)) 
    avg space: O(len(string)) 
    
    amortized time: O(len(string) * len(string)) 
    amortized space: O(len(string))
    """
    answer = ''
    vec = [] 
    
    for string in generate_substrings(string):
        if is_palindrome(string):
            vec.append(string)
    
    for string in vec:
        if len(answer) <= len(string):
            answer = string 
       
    return answer 

def test_find_palindrome():
    assert find_palindrome('ab') == 'b'
    assert find_palindrome('aa') == 'aa' 

def main():
    """
    entry point for test code 
    """
    test_generate_substrings() 
    test_is_palindrome() 
    test_find_palindrome() 

if __name__ == '__main__':
    main()
 

data structures – Need some help in designing this greedy algorithm

There are $n$ jobs $\{J_1, J_2, \ldots, J_n\}$ with durations $\{a_1, a_2, \ldots, a_n\} \in \mathbb{N}$ and deadlines $d_1, d_2, \ldots, d_n \in \mathbb{N}$ respectively. All the jobs are available at the beginning (that is, at time 0). If a job $J'$ is completed at time $t'$ with $t' > d'$, then the penalty $u'$ incurred for job $J'$ is $-1$; if $t' < d'$, then $u' = 1$ (which is an award). There is only one machine, so you cannot process more than one job simultaneously. Every job, once started, must run until it finishes. The goal is to find the schedule for these $n$ jobs (that is, a sequence according to which these $n$ jobs will be performed) which maximizes $\sum_{i=1}^{n} u_i$.

I got this problem as an interview question and went blank all the way. I've been told that the optimal algorithm is greedy. I've tried sorting by deadlines and by durations, neither of which was optimal.
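Since maximizing $\sum u_i$ is the same as minimizing the number of late jobs (the total equals twice the number of on-time jobs minus $n$), one candidate worth checking against examples is the classic Moore-Hodgson rule: process jobs in deadline order, and whenever the running total exceeds the current deadline, evict the longest job scheduled so far. The sketch below is my own illustration of that known rule (treating a job finishing exactly at its deadline as on time, which the problem statement leaves ambiguous), not a verified answer to the interview question.

import heapq

def max_total_award(jobs):
    """jobs: list of (duration, deadline). Returns (#on_time - #late) under the
    Moore-Hodgson schedule: kept jobs run in deadline order, evicted jobs run last."""
    chosen = []          # min-heap of negated durations = max-heap of chosen durations
    total_time = 0
    for duration, deadline in sorted(jobs, key=lambda j: j[1]):
        heapq.heappush(chosen, -duration)
        total_time += duration
        if total_time > deadline:                 # current set cannot all finish on time
            total_time += heapq.heappop(chosen)   # drop the longest chosen job (stored negated)
    n = len(jobs)
    on_time = len(chosen)
    return on_time - (n - on_time)

print(max_total_award([(3, 3), (2, 4), (2, 5)]))   # small sanity check: 2 on time, 1 late -> 1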

pseudocode – Use magic-pivot as a black-box to obtain a deterministic quick-sort algorithm with worst-case running time of O(n log n)

Suppose we have an array A(1 :n) of n distinct numbers. For any element A(i), we define the rank of A(i), denoted by rank (A(i)), as the number of elements in A that are strictly smaller than A(i) plus one; so rank (A(i)) is also the correct position of A(i) in the sorted order of A. Suppose we have an algorithm magic-pivot that given any array B (1 :m) (for anym >0), returns an element B(i) such that m/3≤rank(B(i))≤2m/3 and has worst-case runtime O(n)1. Example:if B= (1,7,6,2,13,3,5,11,8), then magic-pivot(B) will return one arbitrary number among {3,5,6,7}(since sorted order of B is (1,2,3,5,6,7,8,11,13))