## functions – why is this simple code not giving the right answer under `Compile`?

I have two sets of inputs that are fed to a simple function. The inputs are essentially a set of triangles and the normal for each triangle. The function computes the gradient about the points (shown in blue). The notebook can be downloaded from the following Dropbox link:

https://www.dropbox.com/s/yas32nfccd2dzj4/debug%20code.nb?dl=0

The two sets of triangles are supposed to be similar (only separated by a translation in space). I compute the area gradient about point `pt1`:

``````
grad1 = Block[{ptTri, normal, cross, target, facept,
   openS = {0., 0., 0.}, closedS = {0., 0., 0.}, source = pt1},
  Do[
   ptTri = opentr1[[i]];
   normal = normOpentr1[[i]];
   cross = If[ptTri[[1]] == source,
     {target, facept} = {ptTri[[2]], ptTri[[-1]]};
     Cross[normal, facept - target],
     {target, facept} = {ptTri[[1]], ptTri[[-1]]};
     Cross[normal, target - facept]
     ];
   openS += (0.5*cross), {i, 1, Length@normOpentr1}];

  Do[
   ptTri = closedtri1[[j]];
   normal = normClosedtr1[[j]];
   cross = If[ptTri[[1]] == source,
     {target, facept} = {ptTri[[2]], ptTri[[-1]]};
     Cross[normal, facept - target],
     {target, facept} = {ptTri[[1]], ptTri[[-1]]};
     Cross[normal, target - facept]
     ];
   closedS += (0.5*cross), {j, 1, Length@normClosedtr1}];
  0.7*closedS + 1*openS
  ]

(*{0.110728, 0.0466838, 0.752509}*)
``````

Likewise, if I compute the area gradient about the second point `pt2`, I get the same answer.

``````
grad2 = Block[{ptTri, normal, cross, target, facept,
   openS = {0., 0., 0.}, closedS = {0., 0., 0.}, source = pt2},
  Do[
   ptTri = opentr2[[i]];
   normal = normOpentr2[[i]];
   cross = If[ptTri[[1]] == source,
     {target, facept} = {ptTri[[2]], ptTri[[-1]]};
     Cross[normal, facept - target],
     {target, facept} = {ptTri[[1]], ptTri[[-1]]};
     Cross[normal, target - facept]
     ];
   openS += (0.5*cross), {i, 1, Length@normOpentr2}];
  Do[
   ptTri = closedtr2[[j]];
   normal = normClosedtr2[[j]];
   cross = If[ptTri[[1]] == source,
     {target, facept} = {ptTri[[2]], ptTri[[-1]]};
     Cross[normal, facept - target],
     {target, facept} = {ptTri[[1]], ptTri[[-1]]};
     Cross[normal, target - facept]
     ];
   closedS += (0.5*cross), {j, 1, Length@normClosedtr2}];
  0.7*closedS + 1*openS
  ]

(*{0.110728, 0.0466838, 0.752509}*)
``````

The two answers are essentially identical, as the difference between `grad1` and `grad2` is `{1.83187*10^-15, -5.55112*10^-16, 4.44089*10^-16}`.

HOWEVER, the same code under `Compile` does not give the same answers for the two datasets.

``````
With[{epcc = 0.7, epco = 1.},
 Compile[{{point, _Real, 1}, {opentr, _Real, 3}, {normalO, _Real, 2},
   {closedtr, _Real, 3}, {normalC, _Real, 2}},
  Block[{ptTri, source = point, normal, target, facept, cross,
    openS = {0., 0., 0.}, closedS = {0., 0., 0.}, nO = normalO,
    nC = normalC, OT = opentr, CT = closedtr},
   Do[
    ptTri = OT[[i]];
    normal = nO[[i]];
    cross = If[ptTri[[1]] == source,
      {target, facept} = {ptTri[[2]], ptTri[[-1]]};
      Cross[normal, facept - target],
      {target, facept} = {ptTri[[1]], ptTri[[-1]]};
      Cross[normal, target - facept]
      ];
    openS += (0.5*cross);
    , {i, 1, Length@nO}];
   Do[
    ptTri = CT[[j]];
    normal = nC[[j]];
    cross = If[ptTri[[1]] == source,
      {target, facept} = {ptTri[[2]], ptTri[[-1]]};
      Cross[normal, facept - target],
      {target, facept} = {ptTri[[1]], ptTri[[-1]]};
      Cross[normal, target - facept]
      ];
    closedS += (0.5*cross);
    , {j, 1, Length@nC}];

   epcc*closedS + epco*openS
   ], CompilationTarget -> "C"]
 ];
``````

Though the structure of the code is, as far as I can tell, the same, the outputs are completely different.

``````
surfaceGradFn2[pt1, opentr1, normOpentr1, closedtri1, normClosedtr1]
(*{-0.0157582, 0.386426, 0.582118} -> this answer is now different, but I gave the same inputs as before*)

surfaceGradFn2[pt2, opentr2, normOpentr2, closedtri2, normClosedtr2]
(*{0.110728, 0.0466838, 0.752509} -> answer is same as before with the same inputs*)
``````

What am I doing wrong here? Why doesn't the compiled code work when the non-compiled version does? I would like to get the compiled version of the code working.

Also, please ignore what the code intends to do; if you may, kindly just look at why the compiled output is different. Thanks, and grateful for your help.

## calculus – I don't seem to be getting the answer to this question without using L'Hôpital's rule

$$\lim_{x \to 0} \frac{a^{\tan x} - a^{\sin x}}{\tan x - \sin x}$$

We weren't supposed to solve this using L'Hôpital's rule.

So in the beginning, I added and subtracted 1 in the numerator to get it into the standard limit form $\frac{a^x - 1}{x}$. From then on, I got a string of standard limits, but in the end the answer just doesn't match: I always get 0, while the answer is $\ln(a)$.
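For comparison, one route through standard limits that avoids the spurious 0 (a sketch, not necessarily the intended method) is to factor out $a^{\sin x}$ instead of adding and subtracting 1:

```latex
\frac{a^{\tan x} - a^{\sin x}}{\tan x - \sin x}
  = a^{\sin x}\cdot\frac{a^{\,\tan x - \sin x} - 1}{\tan x - \sin x}
```

Setting $t = \tan x - \sin x$, we have $t \to 0$ as $x \to 0$, so the second factor is exactly the standard limit $\frac{a^t - 1}{t} \to \ln a$, while $a^{\sin x} \to a^0 = 1$. The product therefore tends to $\ln a$.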

## client server – Question-Answer Game with coin rewards, how to validate answer?

I'm making a game that is basically a typical question-answer type, with coins as a reward per answered question.

Stack and requirements:

1. UI frameworks of iOS and Android, no game engine.

2. REST API.

3. No offline feature.

I’m a newbie in making games, and so my questions are:

1. Is it okay if I let the client side validate the answer and let it decide how many coins to add to the database?

2. Or do I make the client ask the server to validate the answer? I'm thinking this is not ideal because, in the game, when you choose an answer and it is right, the congratulations screen must pop up instantly.
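For what it's worth, the usual reasoning is that the server must be the authority on correctness and rewards, since a client that mints its own coins can be trivially tampered with. Here is a minimal sketch of what a server-side validation step behind the REST API might look like; the names (`QUESTIONS`, `validate_answer`) and the flat in-memory store are illustrative assumptions, not part of your stack:

```python
# Hypothetical server-side check: the client sends its chosen option, and the
# server alone decides correctness and the coin award before persisting it.
# QUESTIONS stands in for whatever database the real service would use.

QUESTIONS = {
    "q1": {"correct": "B", "reward": 10},
    "q2": {"correct": "D", "reward": 25},
}

def validate_answer(question_id: str, choice: str) -> dict:
    """Return the verdict and coins awarded; the client never sets coins."""
    q = QUESTIONS.get(question_id)
    if q is None:
        return {"correct": False, "coins": 0, "error": "unknown question"}
    is_correct = (choice == q["correct"])
    return {"correct": is_correct, "coins": q["reward"] if is_correct else 0}

print(validate_answer("q1", "B"))  # correct answer earns the reward
print(validate_answer("q2", "A"))  # wrong answer earns nothing
```

One common compromise for the instant congratulations screen is to validate on the server but render the result screen as soon as its response arrives; a single round-trip is usually fast enough to feel instant, and the coin balance shown always comes from the server.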

## CSOM Caml Query Gets Only Root Level Folders in Document Library (Paged Answer)

I can easily get rootFolders from my test library in SharePoint Online using

``````FolderCollection oFolderCollection = targetList.RootFolder.Folders;
clientContext.ExecuteQuery();
``````

The problem is that this doesn't work for my production library as it exceeds the display list threshold. I was able to overcome the threshold error with the following code:

``````using (ClientContext clientContext = new ClientContext(webUri))
{

int rowLimit = 1;
var camlQuery = new CamlQuery();
camlQuery.ViewXml = @"<View><RowLimit>" + rowLimit + @"</RowLimit></View>";

ListItemCollectionPosition position = null;

// This value is NOT List internal name
List targetList = clientContext.Web.Lists.GetByTitle("Clients");

//1st attempt
//FolderCollection oFolderCollection = targetList.RootFolder.Folders;
//clientContext.ExecuteQuery();

int clientCount = 0;
do
{
ListItemCollection listItems = null;
camlQuery.ListItemCollectionPosition = position;
listItems = targetList.GetItems(camlQuery);
clientContext.ExecuteQuery();
position = listItems.ListItemCollectionPosition;

foreach (ListItem item in listItems)
{
if (item != null && item.Folder != null)
{
try
{
clientContext.ExecuteQuery();
if ((item != null && item.Folder != null) && item.Folder.GetType() == typeof(Folder))
{
clientCount++;

//currently getting all files and folders, just want to start at top level folder then load in others if/as needed
Console.WriteLine(clientCount + "- Client: " + item.Folder.Name);
//clientFileDiver(item.Folder, clientContext);

}
}
catch (Microsoft.SharePoint.Client.ServerException)
{
if(item == null || item.Folder == null)
{
continue;
}
throw;
}
catch (ServerObjectNullReferenceException)
{
if (item == null || item.Folder == null)
{
continue;
}
}
catch (Exception)
{
throw;
}

}

}

}
while (position != null);
``````

Please ignore some of the ugly if statements :D. I am very new to CSOM and CAML. Is it possible to write a CAML statement that returns only the folders at the root of the document library? Or is there a way to alter my original approach and retrieve the folders in a paged response via the `targetList.RootFolder` properties? Sorry if this is not clear! Happy Friday, everyone!

## Numerical integration: I can't find options that get NIntegrate[] to give an accurate answer

I'm trying to numerically evaluate an integral (specifically an integral of a function of an integral), and can't find a set of options for `NIntegrate` which results in correct answers. I have tried several (detailed) things below, and would be very grateful to anyone who can help me with this problem.

I specifically want to evaluate, for $p = 1, 2$,
$$m_p(b,c) = \int_{-\infty}^{\infty} dx_1 \int_0^{\infty} dx_2\, f_1(x_1)\, f_2(x_2) \left( \log\!\left( g\!\left( \frac{c\, \mathrm{e}^{b x_1}}{x_2^2};\, b, c \right) \right) \right)^p$$
where
$$\begin{aligned} g\left(y; b, c\right) &= \int_{-\infty}^{\infty} dx_1 \int_0^{\infty} dx_2\, f_1(x_1)\, f_2(x_2)\, h\!\left(y, \frac{c\, \mathrm{e}^{b x_1}}{x_2^2}\right) \\ h(y_1, y_2) &= \frac{1}{\sqrt{2}} \left( \sqrt{1 - \frac{1}{\sqrt{(1+4y_1)(1+4y_2)}}} + \sqrt{1 + \frac{1}{\sqrt{(1+4y_1)(1+4y_2)}}} \right) \\ f_1(x) &= \tfrac12 f_2(x) = \frac{1}{\sqrt{2\pi}}\, \mathrm{e}^{-x^2/2} \end{aligned}$$
The code for this is provided below:

``````
f1[x_] := 1/Sqrt[2 Pi] Exp[-x^2/2];
f2[x_] := 2 f1[x];
h[y1_, y2_] :=
  1/Sqrt[2] (Sqrt[1 - 1/(Sqrt[1 + 4 y1] Sqrt[1 + 4 y2])] +
     Sqrt[1 + 1/(Sqrt[1 + 4 y1] Sqrt[1 + 4 y2])]);
g[y_, b_, c_] :=
  NIntegrate[
   f1[x1] f2[x2] h[y, (c Exp[x1 b])/x2^2],
   {x1, -Infinity, Infinity}, {x2, 0, Infinity}]
m1[b_, c_] :=
  NIntegrate[
   f1[x1] f2[x2] Log[g[(c Exp[x1 b])/x2^2, b, c]],
   {x1, -Infinity, Infinity}, {x2, 0, Infinity}]
m2[b_, c_] :=
  NIntegrate[
   f1[x1] f2[x2] (Log[g[(c Exp[x1 b])/x2^2, b, c]])^2,
   {x1, -Infinity, Infinity}, {x2, 0, Infinity}]
``````

Consider for example the arguments

``````b=0.05;
c=0.05;
``````

From the structure of the integral we know that $m_2(b,c) - m_1(b,c)^2$ is a variance, so it is non-negative. Moreover, since the integrand is not constant and the distributions $f_1, f_2$ are not concentrated at a single point, we know it is strictly positive. However, evaluating the functions always returns

``````
In[]:= m2[b, c] - m1[b, c]^2

Out[]= -1.64707*10^-9
``````

However, by other means (details in the appendix below) we know that
$$m_2(b,c) - m_1(b,c)^2 > 0.014$$

I have reviewed the suggestions on this site, but nothing I found worked. Things I have tried:

• Different combinations of `_?NumericQ` flags in arguments, including all arguments, the `y, y1, y2` arguments only, and no arguments.
• Force recursions using `MinRecursion -> 1` and `MinRecursion -> 2`
• Method change to `Method -> {"GlobalAdaptive", Method -> "ClenshawCurtisRule"}`
• Change of variables in the integrals to $x = \sqrt{2}\, \mathrm{Erf}^{-1}(2t - 1)$, which takes the integration domain from $(-\infty, \infty) \times (0, \infty)$ to $(0, 1) \times (\tfrac12, 1)$.

### Appendix

In support of the statements above, one can show that
$$\begin{aligned} m_p(b,c) &= \int_0^{\infty} dy\, f_3(y) \left( \log\left( g\left(y; b, c\right) \right) \right)^p \\ f_3(y) &= \int_{-\infty}^{\infty} dx_1\, f_1(x_1)\, f_2\!\left( \sqrt{\frac{c}{y}}\, \mathrm{e}^{b x_1/2} \right) \frac12 \sqrt{\frac{c}{y^3}}\, \mathrm{e}^{b x_1/2} \end{aligned}$$
The code for $f_3$ is:

``````
f3[y_, b_, c_] :=
  NIntegrate[
   f2[Sqrt[c/y] Exp[(x1 b)/2]] f1[x1] 1/2 Sqrt[c/y^3] Exp[(x1 b)/2],
   {x1, -Infinity, Infinity}]
``````

which can be evaluated over a list of values and plotted, producing the functions plotted below (same data, only a different plot range). From these it is clear that $\log\left(g\left(y; b, c\right)\right)$ varies appreciably over the range of values for which the distribution $f_3(y)$ has significant support, and hence $m_2(b,c) - m_1(b,c)^2$ is positive. Numerically integrating the plotted data, we can say
$$m_2(b,c) - m_1(b,c)^2 > 0.014$$

Could someone help me answer this question?

When search engines first started, the results returned were based strictly on the keywords used in the search query, which surfaced a wide range of topics depending on the query used. Search engines now narrow the scope of returned results based on how they interpret the search. Do you think this trend is useful or harmful, and why?

Suppose I have a site with more than 100,000 URLs across different sitemaps submitted to Google. If the priority is set to 1 for all URLs, and Google is known to crawl for a limited time, will it crawl the same URLs over and over again?

If I change the priority to 0.1–0.5, will it crawl different URLs each time?

If I set changefreq to monthly, will it crawl the same URL again only after a month? In the meantime, will it crawl the remaining pages?

I want Google to crawl all or most of the URLs provided in the sitemap at least once, instead of crawling the same URLs over and over again. Is there a trick?

## amortized analysis: why does my amortized time complexity proof for a dynamic array using the accounting method get an incorrect answer?

I had trouble formatting the summation symbols, so if anyone knows how to do it correctly, feel free to edit.

I just read the CLRS chapter on amortized analysis. While the aggregate and potential methods are clear to me, I thought I should get a little more practice with the accounting method. My goal was to prove that if an array starts at size 1 and doubles in size whenever it fills up, the resulting amortized time per operation is O(1). Using aggregate analysis, here is the proof I came up with:

If you call append $n$ times, the total cost would be as follows:

$$\sum_{i=0}^{\log n} \frac{n}{2^i}$$

which is less than

$$\sum_{i=0}^{+\infty} \frac{n}{2^i}$$

which is equal to $2n$. So the time complexity for all $n$ operations is $O(n)$, and therefore the time complexity for a single operation is $O(1)$.

However, if the size starts at 1 and then grows by 1 instead of doubling (i.e., the size starts at 1, then becomes 2, then 3, and so on), the time per operation must be $O(n)$, which I established using aggregate analysis:
$$\sum_{i=1}^{n} (i+1)$$

The extra 1 comes from actually placing the next item. This equals $n(n+3)/2$, i.e. $O(n^2)$ for all $n$ operations and $O(n)$ per operation, which is correct. However, using the accounting method I get a different answer. The actual cost of placing an item is 1, and the amortized cost I assigned is 2. So the first operation costs 2; on the second call, the copy costs nothing extra because I prepaid for it earlier, and the amortized cost of placing that element is again 2. The total then becomes:

$$\sum_{i=1}^{n} 2$$

which is $O(n)$ for all operations in total, and $O(1)$ per operation. This is not the correct result. Where did I go wrong, and how can I fix it?
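As a sanity check on the aggregate numbers (this is my own illustrative simulation, not CLRS code), you can count the actual cost of $n$ appends under both growth policies, charging 1 per insertion plus the number of elements copied at each resize:

```python
# Count the "actual" cost of n appends: 1 per insert, plus the number of
# elements copied whenever the backing array grows (double vs. grow-by-one).

def total_cost(n: int, doubling: bool) -> int:
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            cost += size                          # copy all existing elements
            capacity = capacity * 2 if doubling else capacity + 1
        cost += 1                                 # place the new element
        size += 1
    return cost

n = 1024
print(total_cost(n, doubling=True) / n)   # stays bounded: O(1) amortized
print(total_cost(n, doubling=False) / n)  # grows with n: O(n) amortized
```

For $n = 1024$, doubling costs 2047 in total (just under 2 per operation), while growing by one costs 524800 (about $n/2$ per operation), matching the two aggregate analyses above.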