## rest – What approach is best suited for food ordering?

The scenario is to have a website/server where users can order food. Then this order should somehow be inserted into the restaurant’s local database.

My customer doesn’t want a direct connection between the server and the restaurant’s local db.

So, how am I going to handle the orders? I think there are 3 options here.

PS: I think this is a language-agnostic question. But in case it matters, the technologies I’m using are Kotlin, Java, Spring Boot, Hibernate, MySQL

## react.js – React state management design pattern new approach thoughts

Hey people on the internet!

I don’t know where to share my new npm package, so I’m asking: do you know of a good place to share it?

I would also really appreciate it if you took a look at it and gave me your honest opinion: react-spear.

In short, the goal of this package is to make state management more appealing than Redux/MobX.
It uses a store object that contains Subjects; components can subscribe to global value changes and synchronize them with local state.

I have made an effort to contribute to the developer community with this package, and I hope it is a good one.

If you have a counter like this:

src/Counter.(jsx/tsx)

```
import React from 'react';
import { Down } from './Down';
import { Up } from './Up';
import { Display } from './Display';

export const Counter = () => {
  return (
    <div>
      <Up />
      <Down />
      <Display />
    </div>
  );
};
```

It contains Up and Down button components that need to share their state.
You will have a store like this:

src/store.(js/ts)

```
import { Subject } from 'react-spear';

export const Store = {
  counter: new Subject(0),
  // You can add some other values here to the store,
  // and even an entire Store object to be nested here, whatever you want.
};
```

The Down and Up components should look like this:

src/Up.(jsx/tsx)

```
import React from 'react';
import { useSensable } from 'react-spear';
import { Store } from './store';

export const Up = () => {
  const count = useSensable(Store.counter);
  return <button onClick={() => Store.counter.broadcast(count + 1)}>Up {count}</button>;
};
```

src/Down.(jsx/tsx)

```
import React from 'react';
import { useSensable } from 'react-spear';
import { Store } from './store';

export const Down = () => {
  const count = useSensable(Store.counter);
  return <button onClick={() => Store.counter.broadcast(count - 1)}>Down {count}</button>;
};
```

And the Display component will look like this:

src/Display.(jsx/tsx)

```
import React from 'react';
import { useSensable } from 'react-spear';
import { Store } from './store';

export const Display = () => {
  const count = useSensable(Store.counter);
  return (
    <div>
      <div>Count is {count}</div>
    </div>
  );
};
```

## Explanation

• When creating the store, you are creating a subscribable object that can listen to changes.

• When using the broadcast method (you can help me think of other names if you don’t like this one),
you emit the next value of the state, actually like setState but globally.

• Then, with the useSensable hook, you are sensing the changes from the store (listening to the broadcast event of that specific Subject).
Inside, it uses useState to manage the update of the incoming new value,
and useEffect to manage the subscription to that value.

• So every time you broadcast a new value, every component that consumes that value via useSensable
gets re-rendered with the new value.
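Stripped of React, the Subject in the bullets above is a small observer pattern; here is a minimal Python sketch of the idea (illustrative names, not the actual react-spear internals):

```python
class Subject:
    """Hold a value, let listeners subscribe, notify them on every broadcast."""

    def __init__(self, initial):
        self.value = initial
        self._listeners = []

    def subscribe(self, listener):
        self._listeners.append(listener)
        # Return an unsubscribe function (what a useEffect cleanup would call).
        return lambda: self._listeners.remove(listener)

    def broadcast(self, new_value):
        # Like a global setState: store the value, then notify all subscribers.
        self.value = new_value
        for listener in self._listeners:
            listener(new_value)

counter = Subject(0)
seen = []
unsubscribe = counter.subscribe(seen.append)
counter.broadcast(1)
counter.broadcast(2)
unsubscribe()
counter.broadcast(3)  # no longer observed, but the value is still stored
```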

Hope it makes sense, because it does to me.

## algorithms – Approach / data structure to wait for N tasks to complete in arbitrary order

My system needs to wait for the completion of N tasks and terminate when all are completed. The work items are passed in as an immutable array, and the execution of these items is handled externally to my system. Checking for completion is quite inexpensive, and the items can complete in any order. The size of these lists is relatively small (in the hundreds or thousands of entries), so memory size is not an issue. Given the constraints of the system, the task list passed in is an immutable array, so anything I’d want to do with it would need to augment or copy it. In case it’s relevant, I’m using C++17.

A simple (naive?) approach to this would be to create a boolean array of completion status (initialized to false) and mark the corresponding index in that array as true once the work item has been completed. Then, iterate through the task list again and again until all items are completed.

Of course, with this approach, towards the end of the run there will be a lot of checking of the completed list with little actual work to do (and on average, the algorithm will go through half of the entries that have already been done).

Another approach would be to create a singly-linked list of array indices (initialized to the elements [0, N)) and remove elements as work items are completed. That avoids the “wasted checking” of the parallel completed-array approach mentioned above; it would just jump directly to the indices in the task list that are not known to already be completed.

It seems that there may be a faster or more elegant approach to this, though, or a data structure that handles it naturally.
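The index-list idea can be sketched without pointers at all (in Python for brevity, though the question is about C++17): keep the pending indices in a flat array and swap-remove entries as they complete, so later passes only visit unfinished items. The task and completion interfaces here are hypothetical stand-ins for the external system.

```python
def wait_for_all(tasks, is_done):
    """Poll until every task completes, revisiting only pending indices.

    `tasks` is left untouched (mirroring the immutable input array);
    `is_done(task)` is the cheap, externally provided completion check.
    Returns the total number of completion checks performed.
    """
    pending = list(range(len(tasks)))  # indices not yet observed complete
    polls = 0
    while pending:
        i = 0
        while i < len(pending):
            polls += 1
            if is_done(tasks[pending[i]]):
                # Swap-remove: O(1) deletion; visiting order is irrelevant here.
                pending[i] = pending[-1]
                pending.pop()
            else:
                i += 1
    return polls

# Toy usage: task t "completes" once it has been polled t + 1 times.
calls = {t: 0 for t in range(5)}
def is_done(t):
    calls[t] += 1
    return calls[t] > t

total = wait_for_all(list(range(5)), is_done)
```

The same structure translates directly to C++17 with a `std::vector<std::size_t>` of pending indices and the identical swap-and-pop idiom.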

## induction – Is Inductive Logic Programming approach applicable to general theories (not just sets of Horn clauses)?

Inductive Logic Programming (https://en.wikipedia.org/wiki/Inductive_logic_programming) finds a hypothesis theory H for a background theory B and a set of examples E. ILP algorithms and implementations usually expect that H, B, and E are logic programs – sets of Horn clauses – and not general FOL or HOL theories. This approach has been generalized to HOL logic programs as well, e.g. in http://andrewcropper.com/pubs/ijcai16-metafunc.pdf.

My question is: are there efforts to formulate induction/ILP for general theories? Apparently, the algorithm does exist and the problem is undecidable, but still – are there some heuristics, some approximate approaches, some more or less rigorous work on such a generalization? For both full FOL and HOL?

E.g., the referenced wiki article mentions the method of inverse entailment – I see that that approach is general enough – it requires the computation of the most concise (e.g. by Occam’s razor principle – with minimal Kolmogorov or other complexity) set of consequences up to some depth.

Actually, ILP may be the Holy Grail of AI: 1) it can learn general policies from specific policies and hence implement generalization and transfer learning, e.g. in reinforcement learning; 2) it can learn the program which computes the set of input-output patterns (in more or less general form) from background knowledge and hence solve the program synthesis task.

## nt.number theory – A geometric approach to the odd perfect number problem?

Let $$e_d$$ be the $$d$$-th standard-basis vector in the Hilbert space $$H=\ell_2(\mathbb{N})$$.
Let $$h(n) = J_2(n)$$ be the second Jordan totient function.
Define:

$$\phi(n) = \frac{1}{n} \sum_{d|n}\sqrt{h(d)} e_d.$$

Then we have:

$$\left< \phi(a),\phi(b) \right> = \frac{\gcd(a,b)^2}{ab}=:k(a,b)$$

The vectors $$\phi(a_i)$$ are linearly independent for each finite set $$a_1,\cdots,a_n$$ of natural numbers, since

$$\det(G_n) = \prod_{i=1}^n \frac{h(a_i)}{a_i^2}$$
is not zero, where $$G_n$$ denotes the Gram matrix.

Define:

$$\hat{\phi}(n) := \sum_{d|n} \phi(d) = \frac{1}{n} \sum_{d|n} \sigma\left(\frac{n}{d}\right)\sqrt{h(d)} e_d$$

Then we have:

$$n$$ is an odd perfect number if and only if:

$$\left< \hat{\phi}(n),\phi(2) \right> = 1$$

By the triangle inequality we have:

$$|\hat{\phi}(n)| \le \tau(n)$$

where $$\tau$$ counts the number of divisors of $$n$$.

Geometric intuition:
Since the vectors $$\phi(d), d|n$$ are almost orthogonal and have norm $$1$$, we should have by Pythagoras:

$$|\hat{\phi}(n)|^2 \approx \sum_{d|n} |\phi(d)|^2 = \tau(n)$$

A more concrete claim, which I have not been able to prove yet, is:
$$|\hat{\phi}(n)|^2 \ge \tau(n)$$
for all $$n$$?
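Since $$|\hat{\phi}(n)|^2 = \sum_{a|n}\sum_{b|n} k(a,b)$$ with $$k(a,b)=\gcd(a,b)^2/(ab)$$, the claimed inequality is easy to probe numerically; the following Python snippet is a sanity check for small $$n$$, not a proof:

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def phi_hat_norm_sq(n):
    # |phi-hat(n)|^2 = sum over divisor pairs (a, b) of gcd(a, b)^2 / (a * b)
    divs = divisors(n)
    return sum(gcd(a, b) ** 2 / (a * b) for a in divs for b in divs)

def tau(n):
    return len(divisors(n))

# Check the claim |phi-hat(n)|^2 >= tau(n) for n up to 200.
ok = all(phi_hat_norm_sq(n) >= tau(n) - 1e-9 for n in range(1, 201))
```

Note that the diagonal pairs $$a=b$$ alone contribute exactly $$\tau(n)$$ to the sum, which is consistent with the check succeeding.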

Let $$\alpha$$ be the angle between $$\phi(2)$$ and $$\hat{\phi}(n)$$, where $$n$$ is an OPN.
Then, by Jordan’s inequality for the sine, we get after some algebraic manipulation (and using the last claim) the following upper and lower bounds for $$\tau(n)$$ for the OPN $$n$$:

$$\frac{1}{\sqrt{1-\frac{4\alpha^2}{\pi^2}}} \le \tau(n) \le \frac{1}{1-\alpha^2}$$

However, numerical experiments seem to suggest that the last inequality can hold only for $$n=1$$ or $$n$$ a prime, which would contradict the OPN property.

My question is whether one can prove the claim above.

## elementary number theory – is this proof and approach correct? (n is a power of 2) ↔ ¬(n has odd divisors other than the trivial ±1)

It is a biconditional, and thus I have to prove both directions. I want to use a direct proof and a proof by contradiction. The direct one – in short – is just that the prime factorization of n = 2^x is unique, so n has no divisors other than powers of 2.

The proof by contradiction is:

Assume the opposite, that n = 2^m (m a positive integer) is divisible by the odd number 2D + 1, where D is a positive integer. That is, 2^m = (2D + 1)(Q), where Q is the positive integer quotient.

Since the left side is an even number, Q must be an even number too because the product of two odd numbers is odd. So Q = 2R, for a positive integer R. Therefore,

2^m = (2D + 1)(2R).

dividing both sides by 2 yields

2^(m-1) = (2D + 1)(R)

Repeat this process until either the power of 2 on the left side becomes 1, or the quotient on the right side becomes 1.

But then the left side will be even but right side will be odd. A contradiction. Therefore the original statement must be true.

Is this proof and approach correct? Any feedback is much appreciated.
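As a quick sanity check of the statement itself (not of the proof), a short Python script can confirm that, for small n, the powers of two are exactly the numbers with no odd divisor greater than 1:

```python
def has_odd_divisor(n):
    # True if n has an odd divisor other than 1.
    return any(n % d == 0 for d in range(3, n + 1, 2))

def is_power_of_two(n):
    # n > 0 is a power of two iff its binary form has a single set bit.
    return n > 0 and n & (n - 1) == 0

# The biconditional, checked exhaustively for n = 1 .. 999.
check = all(is_power_of_two(n) == (not has_odd_divisor(n))
            for n in range(1, 1000))
```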

## design – How to approach software development when risk/consequences are high?

While it can be frustrating when your desktop freezes, or your video game drops FPS, or your email client crashes, it’s typically not too consequential. The user will likely live to see another day.

But there are cases when software is being used in an environment where the risks/consequences are high and bugs and poor design can affect its users’ wellbeing. For example:

• Encrypted messaging services where bugs may expose user identities (1)
• Software/firmware on a rocket ship. What if the ship landed erroneously, some food supply is lost, and communications to Earth are down because of a buffer overflow in the messaging application?
• Software/firmware in self-driving cars (2)
• Software/firmware in nuclear power plants
• Software/firmware at a missile launch site. What if the “Open The Door” button accidentally launches a missile because it’s 2038 (3) and, just for kicks, Brazil also decided not to observe DST again (4). (I’m aware that this one seems a bit far-fetched but the accuracy of my examples is not the point here.)

In such cases, risk and consequences are quite high and yet, we’ve managed to successfully perform all the above to some extent.

Perhaps this is not a great question and is just a variation of me asking how to reduce bugs in software, but I’m having trouble believing that the answer is to write good software, test well, and hope for the best.

My question: are there guidelines, procedures, best practices, mental models, etc. for software development when risk and consequences are very high? Are there books on this topic specifically? I’d also be glad to see anecdotal answers if you’ve had hands-on experience in developing such software or working in such an environment.

(1) https://www.forbes.com/sites/zakdoffman/2019/08/25/chinese-agencies-crack-telegram-a-timely-warning-for-end-to-end-encryption/#39f74ca56342

(3) https://en.wikipedia.org/wiki/Year_2038_problem

## c# – Best approach to connect master and child tables while inserting data

I need advice on the best approach to connect data between a master and a child table.

I have a console application written in C# which scrapes data from the web, processes it, and inserts it into tables.

The master table has the fields Id, OrdinalNumber, StringDate, and the child table has the fields Id, OrdinalNumber, StringDate, Name, Amount, StartDate, EndDate.

Because I scrape the data from a site, I don’t have a unique identifier, so I rely on OrdinalNumber (a number from the site) and StringDate (a date in string format `ddMMyyyy`; I scrape the site once daily).

Inserting into the master table is done like this:

```
INSERT INTO MasterTable(OrdinalNumber, StringDate) VALUES (@OrdinalNumber, @StringDate)
```

and after that I insert data in child table:

```
INSERT INTO ChildTable(OrdinalNumber, StringDate, Name, Amount, StartDate, EndDate)
VALUES(@OrdinalNumber, @StringDate, @Name, @Amount, @StartDate, @EndDate)
```

Is this approach okay, or would it be better, when inserting into the master table, to call SELECT SCOPE_IDENTITY() to get the last inserted Id and put it in the child table with this query:

```
INSERT INTO ChildTable(MasterId, Name, Amount, StartDate, EndDate)
VALUES(@MasterId, @Name, @Amount, @StartDate, @EndDate)
```

My question: is it better to stick with the first approach, or to use SCOPE_IDENTITY() to get the last Id and populate the child rows with it before saving? I will insert approximately 10,000 rows into the master table and 20,000+ rows into the child table.

I’m using Dapper for the inserts.

1.

```
IDbTransaction transaction = null;

try
{
    using (IDbConnection connection = new SqlConnection(DbConnectionString))
    {
        if (connection.State != ConnectionState.Open)
            connection.Open();

        transaction = connection.BeginTransaction();

        string masterSql = "INSERT INTO Master(OrdinalNumber, StringDate) " +
                           "VALUES(@OrdinalNumber, @StringDate)";

        connection.Execute(masterSql, master, transaction);

        string childSql = "INSERT INTO Child(OrdinalNumber, StringDate, Name, Amount, StartDate, EndDate) " +
                          "VALUES(@OrdinalNumber, @StringDate, @Name, @Amount, @StartDate, @EndDate)";

        connection.Execute(childSql, child, transaction);

        transaction.Commit();
    }
}
catch
{
    if (transaction != null)
    {
        transaction.Rollback();
    }
    throw;
}
finally
{
    if (transaction != null)
        transaction.Dispose();
}
```
2.

```
IDbTransaction transaction = null;

try
{
    using (IDbConnection connection = new SqlConnection(DbConnectionString))
    {
        if (connection.State != ConnectionState.Open)
            connection.Open();

        transaction = connection.BeginTransaction();

        foreach (var m in master)
        {
            string masterSql = "INSERT INTO Master(OrdinalNumber, StringDate) " +
                               "VALUES(@OrdinalNumber, @StringDate); " +
                               "SELECT SCOPE_IDENTITY()";

            var id = connection.ExecuteScalar<int>(masterSql, m, transaction);

            foreach (var c in child.Where(d => d.OrdinalNumber == m.OrdinalNumber))
            {
                string childSql = "INSERT INTO Child(MasterId, Name, Amount, StartDate, EndDate) " +
                                  "VALUES(@MasterId, @Name, @Amount, @StartDate, @EndDate)";

                connection.Execute(childSql, new { MasterId = id, c.Name, c.Amount, c.StartDate, c.EndDate }, transaction);
            }
        }

        transaction.Commit();
    }
}
catch
{
    if (transaction != null)
    {
        transaction.Rollback();
    }
    throw;
}
finally
{
    if (transaction != null)
        transaction.Dispose();
}
```

## python – How to refactor multiple elif statements? I don't know how to approach it, I'm a beginner

(The “anykey” variable is not used.)

```
def load1():
    print(
        "Please choose your region\n 1 for Region 1\n 2 for CAR\n 3 for Region II\n 4 for Region III\n 5 for Region IV\n 6 for NCR\n 7 for Region V\n 8 for Region VI\n 9 for Region VII\n 10 for SOCCSKARGEN\n 11 for Region VIII\n 12 for CARAGA\n 13 for Region IX\n 14 for Region X\n 15 for Region XI ")
    option = int(input("Your option: "))
    # Acts like a switch
    if option == 1:
        print("Region 1\n")
        csv_file = csv.reader(open('file/Region I.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 2:
        print("CAR\n")
        csv_file = csv.reader(open('file/CAR.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 3:
        print("Region 2\n")
        csv_file = csv.reader(open('file/Region II.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 4:
        print("Region 3\n")
        csv_file = csv.reader(open('file/Region III.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 5:
        print("Region 4\n")
        csv_file = csv.reader(open('file/Region IV.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 6:
        print("NCR\n")
        csv_file = csv.reader(open('file/NCR.csv', 'r'))
        for row in csv_file:
            print(row)
        print()
        anykey = input("Press any key to return from main menu")

    elif option == 7:
        print("Region 5\n")
        csv_file = csv.reader(open('file/Region V.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 8:
        print("Region 6\n")
        csv_file = csv.reader(open('file/Region VI.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 9:
        print("Region 7\n")
        csv_file = csv.reader(open('file/Region VII.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 10:
        print("SOCCSKARGEN\n")
        csv_file = csv.reader(open('file/SOCCSKARGEN.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 11:
        print("Region 8\n")
        csv_file = csv.reader(open('file/Region VIII.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 12:
        print("CARAGA\n")
        csv_file = csv.reader(open('file/CARAGA.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 13:
        print("Region 9\n")
        csv_file = csv.reader(open('file/Region IX.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 14:
        print("Region 10\n")
        csv_file = csv.reader(open('file/Region X.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")

    elif option == 15:
        print("Region 11\n")
        csv_file = csv.reader(open('file/Region XI.csv', 'r'))
        for row in csv_file:
            print(row)
        anykey = input("Press any key to return from main menu")
```
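Since every branch differs only in the printed label and the CSV filename, one common refactor (a sketch, one of several possible approaches) is table-driven dispatch: map each option number to its label and file, then write the shared logic once.

```python
import csv

# One entry per former elif branch: option -> (label to print, CSV path).
REGIONS = {
    1: ("Region 1", "file/Region I.csv"),
    2: ("CAR", "file/CAR.csv"),
    3: ("Region 2", "file/Region II.csv"),
    4: ("Region 3", "file/Region III.csv"),
    5: ("Region 4", "file/Region IV.csv"),
    6: ("NCR", "file/NCR.csv"),
    7: ("Region 5", "file/Region V.csv"),
    8: ("Region 6", "file/Region VI.csv"),
    9: ("Region 7", "file/Region VII.csv"),
    10: ("SOCCSKARGEN", "file/SOCCSKARGEN.csv"),
    11: ("Region 8", "file/Region VIII.csv"),
    12: ("CARAGA", "file/CARAGA.csv"),
    13: ("Region 9", "file/Region IX.csv"),
    14: ("Region 10", "file/Region X.csv"),
    15: ("Region 11", "file/Region XI.csv"),
}

def load1():
    # The menu can be generated from the same table.
    menu = "\n".join(f" {num} for {label}" for num, (label, _) in REGIONS.items())
    print("Please choose your region\n" + menu)
    option = int(input("Your option: "))

    if option not in REGIONS:
        print("Unknown option")
        return

    label, path = REGIONS[option]
    print(label + "\n")
    with open(path, "r", newline="") as f:  # 'with' also closes the file
        for row in csv.reader(f):
            print(row)
    input("Press any key to return from main menu")
```

Adding a region then means adding one line to the dictionary instead of a new elif branch.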

## python – Approach for Querying Relational Data

Consider the following models (pseudo-code)

```
Place:
    type: const = "Place"
    id: str
    name: str
    lat: float
    lon: float
    # other fields

Event:
    type: const = "Event"
    id: str
    name: str
    start_date: timestamp
    location: Place
    # other fields
```

I am using Python for the backend and CouchDB as the database.

When creating a new object, I match my data in Python but I use CouchDB queries to get a list of prospective matches, e.g.

```
// Query1
{
  "selector": {
    "type": "Event",
    "start_date": {
      "$gt": new_event.start_date - epsilon,
      "$lt": new_event.start_date + epsilon
    },
    "place.lat": {
      "$gt": new_event.place.lat - epsilon,
      "$lt": new_event.place.lat + epsilon
    },
    "place.lon": {
      "$gt": new_event.place.lon - epsilon,
      "$lt": new_event.place.lon + epsilon
    }
  }
}
```

where `epsilon` is some constant.

The current approach is non-relational, as I am using `Event.place.lat` in the query. To avoid duplication I would prefer a relational approach, i.e. in the DB `Event.place` would be an ID rather than an embedded object. However, with a relational approach I can no longer use `Event.place.lat` in my queries.

## Solution 1: Multiple Queries

To emulate the same query as `Query1` I would:

1. Query using only `new_event.type;new_event.start_date` to obtain `results_1 = (result_1_1, ..., result_1_n)`
2. Generate a list of IDs `relation_ids = (r.place for r in results_1)`
3. Query for the relations using `new_event.place.lat;new_event.place.lon;relation_ids` as filter to obtain `results_2 = (result_2_1, ..., result_2_k)`
4. Merge `results_1;results_2` to a list of `Event` objects (Discarding the `results_1` elements w/o corresponding `results_2` element).

Additional Cost: 1 additional DB query and a little computation in the backend (steps 2 & 4).
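The four steps of Solution 1 can be sketched roughly as follows, assuming a hypothetical `db.find(selector)` wrapper around CouchDB’s `_find` endpoint (the wrapper and field names are illustrative, not a specific client library’s API):

```python
def find_event_matches(db, new_event, eps):
    # Step 1: candidate events filtered by type and start_date only.
    events = db.find({
        "type": "Event",
        "start_date": {"$gt": new_event["start_date"] - eps,
                       "$lt": new_event["start_date"] + eps},
    })

    # Step 2: collect the place IDs referenced by the candidates.
    place_ids = [e["place"] for e in events]

    # Step 3: fetch only those places that also match on lat/lon.
    places = db.find({
        "type": "Place",
        "_id": {"$in": place_ids},
        "lat": {"$gt": new_event["place"]["lat"] - eps,
                "$lt": new_event["place"]["lat"] + eps},
        "lon": {"$gt": new_event["place"]["lon"] - eps,
                "$lt": new_event["place"]["lon"] + eps},
    })

    # Step 4: keep only events whose place survived the second query.
    surviving = {p["_id"] for p in places}
    return [e for e in events if e["place"] in surviving]
```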

## Solution 2: Expanding Models

This approach is straightforward: add the fields `lat`/`lon` to the `Event` objects saved in the DB. This way `Query1` can be executed almost identically (using `new_event.lat`/`new_event.lon`).

Additional Cost: Data duplication (`lat;lon` being in both `Event` & `Place`).

Which would be the recommended implementation? Are there any other approaches that I might be missing (w/o leaving CouchDB)?