## google – I cannot conquer code coverage inside Promise “then” and “catch”

I am trying to gain code coverage on my component by implementing a unit test in my spec file, as follows:

home.component.spec.ts

```typescript
import { HomeService } from './home.service';
import { HttpClientTestingModule } from '@angular/common/http/testing';
import { ComponentFixture, fakeAsync, TestBed, tick } from '@angular/core/testing';

import { HomeComponent } from './home.component';
import { of } from 'rxjs';

describe('HomeComponent', () => {
  let component: HomeComponent;
  let fixture: ComponentFixture<HomeComponent>;

  // stubs
  const registryStub: HomeComponent = jasmine.createSpyObj('HomeComponent', ['getUserData']);
  const fakeNames = {x: 1};

  beforeEach(async () => {
    await TestBed.configureTestingModule({
      imports: [HttpClientTestingModule],
      declarations: [HomeComponent]
    })
    .compileComponents();
  });

  beforeEach(() => {
    fixture = TestBed.createComponent(HomeComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it('should create', () => {
    expect(component).toBeTruthy();
  });

  it('should navigate on promise - success', fakeAsync(() => {
    spyOn(component, 'getUserData').and.returnValue(of(fakeNames));
    (registryStub.getUserData as jasmine.Spy).and.returnValue(Promise.resolve(['test']));
    component.getUserData();

    tick();
    expect(component.getUserData).toHaveBeenCalled();

    // spyOn(component, 'getUserData').and.returnValue(of(fakeNames));
    // component.getUserData().toPromise()
    //   .then((data: any) => {
    //     expect(data).toEqual(fakeNames);
    //   });
  }));

});
```

When I run the `ng test --code-coverage` command, I can see that the code inside the `then` and `catch` blocks is not being covered, as you can see in the illustration below:

Could anyone point me in the right direction to get complete code coverage on this component?

By the way, I have a public repo for this:

I look forward to hearing from you guys.

## dnd 5e – Are there drow deities that encourage cooperation with surface elves, and possibly reunion / conquer?

Inspired by How do I encourage Drow players to not make Drizzt clones?

I love the concept of a truly evil drow cleric adventuring on the surface and trying to cooperate with surface elves while hating their soft, cowardly ways (i.e., their not being merciless, selfish bastards). I vaguely remember there used to be a god in the drow pantheon for just that, but I can’t find any reliable source on him.

So who was this god, where is he described, and does he exist in 5e D&D?

I tagged this with both the general dungeons-and-dragons tag and dnd-5e because I want both his current status and his origins.

## algorithms – Why sort the points according to y coordinates in the closest pair divide and conquer method?

The divide and conquer strategy for the closest pair problem sorts the points by x coordinate so that the median can be found. But does sorting the strip (the strip array contains all points within perpendicular distance d of the median line, where d is the minimum distance found so far) by y coordinate serve any purpose? Is there something that cannot be done with the array already sorted by x coordinate?
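To make the purpose concrete, here is a minimal sketch of the strip check (the names and the tuple representation of points are my own, not from any particular textbook). Once the strip is sorted by y, each point only needs to be compared against the points immediately above it, and the inner loop provably runs a constant number of times per point:

```python
import math

def strip_closest(strip, d):
    """Scan the strip for a pair closer than d.

    strip: points within horizontal distance d of the median line.
    After sorting by y, the while loop exits as soon as the y-gap
    reaches the current best, so it does O(1) work per point -- a
    packing argument shows only a constant number of strip points
    can fit in the d x 2d rectangle above each point.
    """
    best = d
    strip = sorted(strip, key=lambda p: p[1])  # the y-sort in question
    for i in range(len(strip)):
        j = i + 1
        while j < len(strip) and strip[j][1] - strip[i][1] < best:
            best = min(best, math.dist(strip[i], strip[j]))
            j += 1
    return best
```

Without the y-sort there is no ordering that bounds the inner loop: the strip can contain Θ(n) points, and comparing all pairs inside it would be Θ(n²) in the worst case, destroying the overall O(n log n) bound.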

## time complexity – Why is the maximum single sell profit by divide and conquer O(n log n)

The single sell profit problem is:

Given a list of prices on each day, find the maximum profit that could have been made by buying on one of the days and selling on a later day.

There is a solution with a single scan that is easy to implement and runs in $$O(n)$$. This question is not about that. It’s about the divide-and-conquer solution. Here’s my Python implementation:

```python
from typing import List, Tuple

def best_trade(prices: List[int]) -> int:
    def f(prices: List[int]) -> Tuple[int, int, int]:
        if len(prices) < 2:
            return 0, min(prices), max(prices)
        else:
            best_left, min_left, max_left = f(prices[: len(prices) // 2])
            best_right, min_right, max_right = f(prices[len(prices) // 2 :])
            return (
                max(best_left, best_right, max_right - min_left),
                min(min_left, min_right),
                max(max_left, max_right),
            )

    return f(prices)[0] if prices else 0  # else 0 for the corner case where prices = []
```

The inner recursive function returns the maximum profit, the minimum price, and the maximum price.

This recurses by breaking the problem into two subproblems, each half the size of the original problem. As far as I can tell the combination step outside the recursion is $$O(1)$$. I’m using `max` and `min` builtins for readability, but it just compares 3 numbers, 2 numbers and 2 numbers.

If that’s correct, the running time is $$T(n) < 2T(n/2) + O(1)$$. By the master method, this implies the algorithm has time complexity $$O(n)$$. That’s because $$a = 2$$, $$b = 2$$ and $$d = 0$$, so we’re in the case where $$a > b^d$$, so $$O(n^{\log_b a}) = O(n)$$.

I can’t see any flaw in the above, but I’ve read twice that this algorithm is $$O(n \log n)$$: once when this problem is discussed in Elements of Programming Interviews in Python by Aziz, Lee and Prakash (it’s problem 5.6), and once in these PDF lecture notes by Kevin Zatloukal of UW (page 23). Both these sources say the divide and conquer solution is slower than a simple scan.

Reading between the lines in both cases, they appear to be describing a divide-and-conquer implementation without an inner function that returns the best trade directly, which requires them to call `min` and `max` on the subarrays, i.e.

```python
from typing import List

def best_trade_2(prices: List[int]) -> int:
    if len(prices) < 2:
        return 0
    else:
        best_left = best_trade_2(prices[: len(prices) // 2])
        best_right = best_trade_2(prices[len(prices) // 2 :])
        return max(
            best_left,
            best_right,
            max(prices[len(prices) // 2 :]) - min(prices[: len(prices) // 2]),
        )
```

I can see here that the work done outside the recursion is $$O(n)$$ because of the `max`/`min` calls on the sublists, which implies the algorithm is $$O(n \log n)$$.

My implementation seems fundamentally the same algorithm but it has different complexity. I feel like I’m getting work for free. What am I missing? Is my implementation actually different (and better)? Or is my analysis of my implementation wrong?
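For reference, the single-scan $$O(n)$$ solution mentioned at the top can be sketched as follows (my own sketch, useful as a cross-check for either divide-and-conquer version):

```python
def best_trade_scan(prices):
    """O(n) single scan: track the lowest price seen so far and the
    best profit achievable by selling at the current price."""
    best, lowest = 0, float("inf")
    for p in prices:
        lowest = min(lowest, p)
        best = max(best, p - lowest)
    return best
```

Any divide-and-conquer implementation should agree with this scan on random inputs, which makes it easy to check whether a refactoring preserved correctness.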

## python – Divide and Conquer Password Bruteforcer

My program brute-forces a password. The password is a string composed of a key and a four-digit numeric code. The key is known, so we are basically brute-forcing 0000 through 9999:

`UoMYTrfrBFHyQXmg6gzctqAwOmw1IohZ 4143`

I updated a script I wrote to take advantage of multiprocessing in order to run faster.
The basic idea is to divide the task by the number of CPUs available.
There are two Events set up:

• `prnt_sig_found` is used by the subprocesses to tell the parent if they succeed in guessing the right password.
• The parent process then uses `child_sig_term` to halt each subprocess.

My Python’s rusty and I think I made some bad choices. It would be useful to have my assumptions invalidated. 🙂

```python
#!/usr/bin/env python
# coding: utf-8

import multiprocessing as mp
import socket
import time
import math
import sys
import os

class Connection:
    def __init__(self, pin = 0, max_iter = 10000, sock = None):
        print('initializing socket instance ...')

        self.pin = pin
        self.max_iter = max_iter

        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def p_name(self):
        return mp.current_process().name

    def connect(self, host='127.0.0.1', port=30002):
        print(self.p_name(), 'connecting ...', host, port)
        self.sock.connect((host, port))
        print(self.p_name(), 'connection successful.')

    def write(self, msg):
        print(self.p_name(), 'sending', msg)
        self.sock.sendall(msg)

        data = self.sock.recv(4096)
        return data

    def close(self):
        try:
            self.sock.shutdown(0)
            self.sock.close()
        except:
            pass

    def execute(self, child_sig_term, prnt_sig_found):
        start_time = time.time()
        print(self.p_name(), 'executing ...')

        self.connect()

        welcome_str = self.write('greetings !')
        print(welcome_str)

        while self.pin < self.max_iter:
            if child_sig_term.is_set():
                break

            pin_str = str(self.pin).zfill(4)
            message = self.password + " " + pin_str + "\n"  # add newline char to flush message or it doesn't get sent

            data = self.write(message.encode())

            if b'Wrong' in data:
                print(self.p_name(), 'Wrong guess %s', pin_str)
            else:
                prnt_sig_found.set()
                break

            self.pin += 1
            time.sleep(0.5)

        end_time = time.time()
        total_time = end_time - start_time
        print(self.p_name(), "start: "+str(self.pin), ' end: '+str(self.max_iter), 'total_time: ', str((total_time)/60) + ' minutes')

def main():
    print('main')

    connections = []
    processes = []

    prnt_sig_found = mp.Event()
    child_sig_term = mp.Event()

    MAX_ITER_COUNT = 10000
    processor_count = mp.cpu_count()

    step_count = int(math.floor(MAX_ITER_COUNT / processor_count))  # math.floor returns a float in python 2
    end = step_count
    start = 0

    print('Initial values ->', processor_count, step_count, start, end)

    try:
        for i in range(processor_count):
            conn = Connection(pin = start, max_iter = end)
            proc_name = 'BF[ ' + str(start) + ' - ' + str(end) + ' ]'

            process = mp.Process(name=proc_name, target=conn.execute, args=(child_sig_term, prnt_sig_found))
            process.daemon = True

            connections.append(conn)
            processes.append(process)

            start = end + 1
            end += start + step_count

            # ensure start and end don't exceed max
            if MAX_ITER_COUNT < end  : end = MAX_ITER_COUNT
            if MAX_ITER_COUNT < start: start = MAX_ITER_COUNT

        # start all processes
        for process in processes:
            process.start()

        # wait for all processes to finish
        # block the main program until these processes are finished
        for process in processes:
            process.join()

        prnt_sig_found.wait()
        child_sig_term.set()

    except:
        pass

    finally:
        for conn in connections:
            conn.close()

        for process in processes:
            if process.is_alive():
                process.terminate()

if __name__ == '__main__':
    main()
```
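As a side note, the range-splitting logic in `main()` can be isolated into a small helper and unit-tested on its own (`partition` here is a hypothetical helper of mine, not part of the script above), which makes off-by-one mistakes in the `start`/`end` updates easy to catch:

```python
def partition(total, workers):
    """Split range(total) into `workers` contiguous half-open ranges."""
    step = total // workers
    ranges = [(i * step, (i + 1) * step) for i in range(workers)]
    # the last worker absorbs the remainder when total % workers != 0
    ranges[-1] = (ranges[-1][0], total)
    return ranges
```

For example, `partition(10000, 4)` yields `[(0, 2500), (2500, 5000), (5000, 7500), (7500, 10000)]`, covering every pin exactly once with no gaps or overlaps.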

## c++ – Divide and conquer solution for finding the majority element in the given input

I am using a divide and conquer strategy to solve the majority element problem. An element is said to be in the majority if it repeats more than n/2 times, where n is the number of elements in the input.
Return 1 if a majority element is present, and 0 otherwise.

This is the algorithm that I am using:

- We keep dividing the array in half until we reach arrays of size two, and we compare the two elements of each array.
- If they are the same, they are the majority element of that array and we return their value. If they are not the same, we return a special value to signify that there is no majority element (the special value is -1 in the code below).
- Moving recursively back up the arrays, we check the values returned by the two child halves. As with the base case above, if the elements are the same we return them; otherwise we return the special value.

```cpp
#include <iostream>
#include <vector>
using namespace std;
typedef long long ll;

// Function to count the number of occurrences of an element in the input in linear time
ll simple_count(vector<ll> &v, ll p) {
    ll count = 0;
    for (ll i = 0; i < v.size(); i++) {
        if (v[i] == p)
            count++;
    }
    return count;
}

// Function to find the majority element using the above algorithm
ll majorityElement(vector<ll> &v, ll left, ll right) {
    if (left == right)
        return v[left];
    if (left + 1 == right) {
        if (v[left] == v[right])
            return v[left];
        return -1;
    }
    ll mid = left + (right - left) / 2;
    ll p = majorityElement(v, left, mid);
    ll q = majorityElement(v, mid + 1, right);
    if (p != -1 && q == -1) {
        if (simple_count(v, p) > ((ll)v.size() / 2))
            return p;
        return -1;
    }
    else if (p == -1 && q != -1) {
        if (simple_count(v, q) > ((ll)v.size() / 2)) {
            return q;
        }
        return -1;
    }
    else if (p != -1 && q != -1) {
        ll p_count = simple_count(v, p);
        ll q_count = simple_count(v, q);
        if (p_count > q_count && p_count > v.size() / 2)
            return p;
        else if (q_count > p_count && q_count > v.size() / 2)
            return q;
        return -1;
    }
    else if (p == q && p != -1) {
        if (simple_count(v, p) > (ll)v.size() / 2)
            return p;
        return -1;
    }
    return -1;  // p == q == -1
}

int main() {
    ll n;
    cin >> n;
    vector<ll> v(n);
    for (ll i = 0; i < n; i++)
        cin >> v[i];
    ll k = majorityElement(v, 0, v.size() - 1);
    if (k == -1)
        cout << 0;
    else
        cout << 1;
    return 0;
}
```

The problem I am facing is that this code gives me a Time Limit Exceeded (TLE) error on a few inputs (the Coursera grader does not share the inputs on which we fail). I believe the time complexity of my code is O(n log n). Please help me optimize the code so that it does not get TLE.
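For what it's worth, one likely culprit is that `simple_count(v, p)` above always scans the whole vector at every level of the recursion, which can push the total work well past O(n log n). A common fix is to count occurrences only within the current subrange, so each recursion level does O(n) work in total. A sketch of the idea in Python (my own translation, not a drop-in replacement for the C++):

```python
def majority_element(v, left=0, right=None):
    """Return the majority element of v[left..right], or -1 if none.

    Counting is restricted to the current subrange, so each level of
    the recursion does O(n) total work and the whole algorithm is
    O(n log n).
    """
    if right is None:
        right = len(v) - 1
    if left == right:
        return v[left]
    mid = left + (right - left) // 2
    p = majority_element(v, left, mid)
    q = majority_element(v, mid + 1, right)
    if p == q:  # both halves agree (possibly both -1)
        return p
    length = right - left + 1
    for cand in (p, q):
        if cand != -1:
            count = sum(1 for i in range(left, right + 1) if v[i] == cand)
            if count > length // 2:
                return cand
    return -1
```

The key invariant is that a majority element of the whole range must be a majority element of at least one half, so only the two returned candidates ever need recounting.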

## sorting – Recurrence and Runtime of Divide and Conquer Bogosort

Here I propose a way to reduce the Bogosort runtime from factorial to exponential using a divide and conquer approach. This is something we have probably all pondered extensively.

https://en.wikipedia.org/wiki/Bogosort
Input: an unsorted array A[1 … n].
Output: a sorted array.

Recall why the normal Bogosort runtime is O(n!).

Say we must randomly guess the smallest element of an array. What are our odds of guessing right? 1/n, of course. Say we got it right! Now let's randomly guess the second smallest element, having placed the first one correctly. What are our odds of guessing right now? 1/(n-1), of course.

$$\frac{1}{n} \cdot \frac{1}{n-1} \cdot \frac{1}{n-2} \cdots \frac{1}{n-n+1}$$
The expectation is $$n!$$.

Let's use a divide and conquer strategy in Bogosort to improve the runtime. We will use a modified merge sort to accomplish this. Recall that merge sort recursively merges two arrays. Can Bogosort take advantage of the fact that we are merging two already sorted arrays? Say we must randomly guess the smallest element of two sorted arrays. We know the smaller of the two can only be the first element of one of these two arrays. If we guess at random only between the two smallest elements of each array, what are our odds of guessing correctly? 1/2. This sounds better than 1/n, but let's look at an example recurrence tree.

Example recurrence tree:
n = 4; brackets represent a merge that is performed.

$$\left(\frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2}\right)$$
$$\left(\frac{1}{2} \cdot \frac{1}{2}\right) + \left(\frac{1}{2} \cdot \frac{1}{2}\right)$$
$$\left(\frac{1}{2}\right) + \left(\frac{1}{2}\right) + \left(\frac{1}{2}\right) + \left(\frac{1}{2}\right)$$

(Depth 0) The expectation is 16
(Depth 1) The expectation is 4 + 4 = 8
(Depth 2) The expectation is 1 + 1 + 1 + 1 = 4

Note that in our base case we only need to sort two values, a guess between two values, a total of n times.
From the above, we see the expected runtime of D&C Bogosort:
$$O\left(2^n + 2^{\frac{n}{2}} + 2^{\frac{n}{4}} + \cdots\right)$$

I need help confirming the runtime. My original assumption was that the runtime is simply $$O(2^n)$$. However, I think I need a $$\log$$ in there somewhere. I'm not sure how to apply the Master Theorem to the $$O(2^n)$$ term in my hypothetical recurrence:

$$T(n) = 2T\left(\frac{n}{2}\right) + O(2^n)$$

We have successfully reduced Bogosort from factorial to exponential runtime using divide and conquer. Let me know how I can improve my analysis.
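For what it's worth, a recursion-tree sum suggests no log factor appears. Assuming (my reading of the tree above) that a merge at depth $$d$$ costs $$2^{n/2^d}$$ expected comparisons and that there are $$2^d$$ merges at that depth, the total expected work is

```latex
\sum_{d=0}^{\log_2 n} 2^d \cdot 2^{n/2^d}
  \;=\; 2^n \;+\; 2 \cdot 2^{n/2} \;+\; 4 \cdot 2^{n/4} \;+\; \cdots
  \;=\; O(2^n)
```

since every term after the first is at most $$2^{n/2 + \log_2 n}$$, which is vanishingly small next to $$2^n$$: the root dominates the tree. This is the "root-heavy" situation in spirit, although the Master Theorem itself does not cover an exponential $$f(n)$$, so the direct level-by-level sum is the safer argument.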

## A simple proof for divide and conquer

Suppose a simple program is reading messages from a distributed cluster.
The cluster has 3 partitions, and the program has three readers, each assigned one partition. The readers work in parallel.
All 3 partitions carry the same data load.

If the total data load is n messages (across all partitions), then, since we have 3 readers, it is possible to divide the reading into n/3 parts.

The more readers we have, the faster it goes. That is clear to me, and it is pretty simple, but I can't find a mathematical proof for it.

What would be the simplest mathematical proof for this scenario?
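A minimal formalization (my own notation): let each message take unit time to process, let $$n$$ be the total number of messages, and let $$k$$ be the number of readers with the load split evenly. One reader needs time $$T_1 = n$$; since the $$k$$ readers work in parallel on disjoint partitions, the wall-clock time is

```latex
T_k = \frac{n}{k}, \qquad \text{speedup} \;=\; \frac{T_1}{T_k} \;=\; k .
```

This is just the ideal case of parallel speedup with no sequential fraction; any per-reader overhead or skew in partition sizes reduces the speedup below $$k$$, which is the content of Amdahl's law.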

## divide and conquer – Median of the difference matrix

Given a set $$A = \{a_i\}$$ with $$n$$ elements, find the median of the (implicit) matrix $$B = (b_{ij})$$, where $$b_{ij} = |a_i - a_j|$$.

The obvious solution would be to run a deterministic linear-time median-finding algorithm on the constructed matrix B, which would give us a complexity of $$O(n^2)$$. Is there any way (probably some divide and conquer approach together with linear-time median finding) to obtain a time complexity of $$O(n \log n)$$?
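For concreteness, the obvious baseline looks like this (my own sketch; I use sorting for brevity, which is $$O(n^2 \log n)$$ — substituting a linear-time selection over the $$n^2$$ values gives the $$O(n^2)$$ stated above):

```python
def median_abs_diff(a):
    """Baseline: generate all |a_i - a_j| (including the zero diagonal)
    and pick the middle element of the sorted list. The matrix B is
    never stored as a 2D structure, but all n^2 entries are produced."""
    diffs = sorted(abs(x - y) for x in a for y in a)
    return diffs[len(diffs) // 2]  # one convention for the median
```

Any faster approach would have to avoid generating all $$n^2$$ entries, e.g. by sorting $$A$$ first and then searching over candidate median values while counting pairs below a threshold — whether that reaches $$O(n \log n)$$ is exactly the open question here.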

## divide and conquer – Two-peak search algorithm

No. Any algorithm to locate those two peaks takes $$\Omega(N)$$ time, assuming the only operation allowed is comparing two elements of the array, i.e., the comparison model.

Otherwise, suppose algorithm $$A$$ needs less than that much time. Let $$arr[0], arr[1], \cdots, arr[N-1]$$ be the given array; then for some $$N$$ large enough, $$A$$ will have inspected at most $$N-3$$ elements before it finishes. Suppose that, for all the comparisons made by $$A$$, it finds $$arr[i] < arr[j]$$ whenever $$i < j$$.

Let $$arr[a], arr[b], arr[c]$$ be three elements that have not yet been inspected, $$a < b < c$$. Whatever $$A$$ has done, the two peaks could be $$\{arr[a], arr[N-1]\}$$. They could be $$\{arr[b], arr[N-1]\}$$ as well. Since $$A$$ cannot distinguish these two cases, it cannot have determined the two peaks.

Exercise. (Less than one minute.) Modify the above argument so that it remains valid in the case where a peak must have both a left neighbor and a right neighbor.