ddos – Public API security: authentication vs. rate limiting, etc.

We are building a SaaS product that lets companies configure and organize the sale of a certain class of products/services. The product has an API at its core and an ecosystem of several applications around it. These include public-facing web applications (sites), a web-based CMS and an iOS application for salespeople. Our customers can use these or build their own applications that talk to our API.

There has been a long debate among us about how the API should be secured. It has authentication (API key/secret for applications and username/password for users) and role/permission-based authorization. At the moment you cannot get a useful API response (apart from its version) unless the requester authenticates. This includes endpoints that return data available to the general public, such as the list of items for sale.

The moot point is whether the API should require authentication for what is essentially public data.

The arguments for authentication:

  • We cannot simply leave the API open to whoever calls it, even if callers could get the same information anyway through the public web applications. An open API can be subject to abuse or load attacks. It is better to control who has access by issuing keys/secrets per client application, which becomes the first line of defense.

The arguments against authentication:

  • It does not make sense to restrict access to what is publicly available anyway (through the public-facing web applications, which have a key/secret with the "public" role baked into them);
  • Making authentication mandatory adds no value and only creates unnecessary overhead, by requiring public applications to implement authentication clients, maintain keys/secrets and refresh authentication tokens. Applications should only authenticate against the API when a user needs to log in (which some clients simply never need, since they use a guest checkout flow);
  • Any abuse problem (for example, exceeding the request rate limit, load attacks, etc.) should be handled by the DDoS protection layer, by the API itself, or both (a minimal throttling sketch follows this list). Authentication is not the right protection, since a malicious client could obtain the application's credentials and cause problems for the API anyway, not to mention that the rate of authentication attempts would also have to be limited.
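
For concreteness, the throttling mentioned in the last point can be as simple as a per-client token bucket sitting in front of the endpoints, independent of authentication. The sketch below is only illustrative: the bucket capacity and refill rate are hypothetical values, and a real deployment would key one bucket per client IP or per API key.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* One bucket per client (e.g. per IP address or per API key). */
typedef struct {
    double tokens;          /* tokens currently available              */
    double capacity;        /* maximum burst, e.g. 20 requests         */
    double refill_per_sec;  /* sustained rate, e.g. 5 requests/second  */
    time_t last_refill;     /* last time the bucket was topped up      */
} token_bucket;

/* Returns true if the request may proceed, false if it should be
 * rejected (e.g. with HTTP 429), whether or not the caller authenticated. */
static bool allow_request(token_bucket *b)
{
    time_t now = time(NULL);
    double elapsed = difftime(now, b->last_refill);

    b->tokens += elapsed * b->refill_per_sec;
    if (b->tokens > b->capacity)
        b->tokens = b->capacity;
    b->last_refill = now;

    if (b->tokens >= 1.0) {
        b->tokens -= 1.0;
        return true;
    }
    return false;
}

int main(void)
{
    /* Hypothetical client bucket: bursts of 20, 5 requests/second sustained. */
    token_bucket client = { 20.0, 20.0, 5.0, time(NULL) };

    for (int i = 0; i < 25; i++)
        printf("request %2d: %s\n", i, allow_request(&client) ? "allowed" : "throttled");

    return 0;
}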

Is either of the two positions above horribly wrong, something that would be laughed at in the API world? Is there a correct approach here, or are both approaches sensible security-wise, so that the choice can be made on other grounds such as convenience / ease of implementation?

sharepoint online – Content query web part throttling limit

In SharePoint Online I have a Content Query web part that I have configured to show the 5 most recent items from a subsite, sorted by Modified, but I get no results and an error message appears about exceeding the query throttling limit (probably because the total number of items on the site is more than 5k). I am aware that the query will not work if the total number of items to query is more than 5k; however, it does work when I query a specific library with more than 5k items. Here is my setup and the test cases:

Subsite 1:

  • Library 1 (4k items)
  • Library 2 (2k items)

Subsite 2:

Subsite 3:

Test cases

  1. Query Subsite 1 = No results
  2. Query Subsite 2 = No results
  3. Query Subsite 3 = Show results
  4. Query Library 1 = Show results
  5. Query Library 3 = Show results

I am confused now as to how the query can run against a library with 8k items, yet I cannot run it against a site with a total of more than 5k items.

c – Frame limiting in an SDL game

I'm a relatively new programmer taking my first steps in graphics programming. I'm writing a quick Pong clone and I want to cap the FPS to save system resources. This is how I implemented it (other code omitted for brevity):

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>
#include "SDL.h"
#define WIN32_LEAN_AND_MEAN
#include "Windows.h"
#define FPS 30

Uint32 starttime, endtime, deltatime;

int main(int argc, char *argv[])
{
    // initialize SDL and do other things
    // (window, renderer, event and appisrunning come from the omitted code)
    // start the game loop
    timeBeginPeriod(1);                  // ask for ~1 ms Sleep() resolution
    while (appisrunning)
    {
        starttime = GetTickCount();
        SDL_PollEvent(&event);
        if (event.type == SDL_QUIT)
        {
            SDL_DestroyRenderer(renderer);
            SDL_DestroyWindow(window);
            SDL_Quit();
            appisrunning = false;
            break;
        }
        // render things here
        endtime = GetTickCount();
        if (endtime > starttime)
        {
            deltatime = endtime - starttime;
        }
        else    // handles wrap-around of the 32-bit tick count
        {
            deltatime = (UINT32_MAX - starttime) + endtime + 1;
        }
        if (deltatime > (1000 / FPS)) {}
        else
        {
            Sleep((1000 / FPS) - deltatime);
        }
    }
    timeEndPeriod(1);
    return 0;
}

I use timeBeginPeriod(1) in an effort to give Sleep() a resolution of ~1 ms. According to the MSDN documentation, this function sets the timer resolution system-wide and can result in higher CPU usage, since the scheduler has to switch tasks more frequently (if I understand it correctly).
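
For comparison, the same frame cap can be sketched with SDL's own timing calls (SDL_GetTicks and SDL_Delay) instead of GetTickCount, Sleep and timeBeginPeriod. This is only a rough, platform-independent variant of the loop above, with the event handling and rendering omitted as before:

// Portable sketch of the same idea using only SDL timing functions.
while (appisrunning)
{
    Uint32 framestart = SDL_GetTicks();

    // ... poll events, update and render here, as in the loop above ...

    Uint32 frametime = SDL_GetTicks() - framestart;  // unsigned math handles wrap-around
    if (frametime < 1000 / FPS)
        SDL_Delay((1000 / FPS) - frametime);
}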

Is this the right approach for this task? What feedback do you have for me?

Is there a limit on the minimum output size, and a fixed fee per transaction, for payments in Bitcoin?

In the following article from http://lightning.network/, titled "The Bitcoin Lightning Network Summary" (link to article), it is mentioned:

"Lightning allows one to send funds to 0.00000001 bitcoin
No custody risk. The bitcoin blockchain currently enforce a
Minimum output size many hundreds of times higher (1)
, and a fixed
fee per transaction (2)
what makes micropayments not practical. Flash of lightning
Allows minimum payments denominated in bitcoins, using real bitcoins
proceedings."

(1) Is there, in (standard) Bitcoin, a limit on the minimum payment amount (output size)?

(2) Is there a fixed fee per transaction for payments in bitcoin? And if so, how much is this fee?

Have you ever had serious problems with Paypal? (freezing, limitation)

I have been seeing some alarming stories online lately from people who claim that Paypal withheld their account balance for various reasons. I'm not talking about the "pending" status that Paypal applies. Granted, in some of the cases I read, users had violated the Terms of Service by misrepresenting themselves as adults while underage, providing a false identity, etc.
In a conversation on reddit, one person stated that Paypal "will use whatever excuse it finds to keep your money," but I'm not sure how biased that view is. I guess it makes sense in a way, since it is a profit-oriented service, not a bank, and some banks do this all the time.

Other things can cause a freeze or a limit on the amount you can process per transaction, such as sudden purchases or transfers involving large sums of money. In some cases, people who are just launching their businesses may face this problem.

A series of chargebacks, bad reviews from the other parties to your transactions, logins from strange IPs, suspicion of profiting from illicit content, products or services, or a bad credit score may be other reasons that draw Paypal's attention to an account.

There are cases of people receiving a 6-month hold on balances of thousands of dollars. While Paypal rarely holds funds for longer than 6 months, even that period of time can be devastating for people who pay their bills through Paypal, or whose earnings and savings are all handled through Paypal.

Have you ever had this kind of problem with Paypal?
I've never had problems with them, personally.

Set theory: what is the limit of iterating class comprehension, reflection and limitation of size?

In an earlier posting about adding a reflection principle together with a limitation of size axiom to Ackermann's set theory, the answer was that the theory goes up to a Mahlo cardinal.

I'm wondering here whether this method can be iterated, and what is the most that can be achieved through this iteration process.

For example, let's define a theory $\mathsf{K}^{+}(V_{\lambda})$ in the language of $FOL(=, \in, V_1, V_2, \dots, V_{\lambda})$, where $\lambda$ is a specific recursive ordinal that has some specific ordinal notation, that is, whenever $\lambda < \omega_1^{CK}$.

Now the idea is that each theory $\mathsf{K}^{+}(V_{\lambda})$ has the axiom of extensionality, a class comprehension axiom schema for $V_{\alpha}$, a reflection axiom for $V_{\alpha}$, and a limitation of size axiom for $V_{\alpha}$, for each $\alpha < \lambda$; we also have the axiom schema:

Yes $ alpha < beta $, so: $ “ forall x (x subset V _ { alpha} to x in V { beta}) "$
it is an axiom

More specifically, the class comprehension formula for $V_{\alpha}$ is:

$$\forall x_1,\dots,x_n \subseteq V_{\alpha}\; \exists x \forall y\, (y \in x \leftrightarrow y \in V_{\alpha} \wedge \varphi(y, x_1,\dots,x_n))$$ where $\varphi(y, x_1,\dots,x_n)$ is a formula that does not use any of the primitives $V_{\beta}$ with $\beta > \alpha$.

While the reflection schema formula for $V_{\alpha}$ would be written as:

$$\forall x_1,\dots,x_n \in V_{\alpha}\, [\exists y\, (\varphi(y,x_1,\dots,x_n)) \to \exists y \in V_{\alpha}\, (\varphi(y,x_1,\dots,x_n))]$$ where $\varphi(y,x_1,\dots,x_n)$ does not use any primitive symbol $V_{\beta}$ with $\beta \geq \alpha$.

Now, what is the limit of the strength of the $\mathsf{K}^{+}(V_{\lambda})$ theories?

lo.logic – What is the strength of adding size limitation and a simple version of reflection to Ackermann's set theory?

The following theory is formulated in first-order predicate logic with the extra-logical primitives of equality $=$ and membership $\in$, and a single primitive constant symbol $V$ denoting the class of all sets.

The axioms are those of first-order identity theory, plus:

  1. Extensionality: $\forall x\, (x \in a \leftrightarrow x \in b) \to a = b$

  2. Class comprehension axiom schema: if $\varphi(y)$ is a formula in which the symbol $y$ occurs free, then all closures of $\exists x \forall y\, (y \in x \leftrightarrow y \in V \wedge \varphi(y))$ are axioms.

  3. Reflection: if $\varphi(y, x_1,\dots,x_n)$ is a formula in $FOL(=, \in)$ in which only $y, x_1,\dots,x_n$ occur free, then:

$$\forall x_1,\dots,x_n \in V\, [\exists y\, (\varphi(y,x_1,\dots,x_n)) \to \exists y \in V\, (\varphi(y,x_1,\dots,x_n))]$$

is an axiom.

  4. Super-transitivity: $x \in V \wedge y \subseteq x \to y \in V$


This system would interpret the whole of Ackermann's set theory [Harvey Friedman]. However, I find it more elegant than Ackermann's. My question here is: if we replace the last axiom with a limitation of size axiom stating that $V$ is the class of all subsets of it that are strictly smaller in cardinality than it, that is, formally:

  4. Limitation of size: $\forall x\, (x \in V \leftrightarrow x \subseteq V \wedge |x| < |V|)$

So how much would this increase the consistency strength of this theory?

I mean, this would push the strength beyond $ZFC$ and $MK$, since $V$ would then be inaccessible, and this fact can be described by a first-order formula, so that, by reflection, there would be a set in $V$ that is inaccessible.

Should I base the links created each day on the SUBMITTED or VERIFIED limit?

What's happening, GSA pplz? :)

When I'm running a campaign, I find I can't just let the program run, or I get ridiculous spikes of 1000 links when the normal is only 10. :|

Fortunately, GSA made it so I can pause a project after X links.

You can PAUSE or STOP a project after XX SUBMISSIONS or VERIFIED links for YY minutes.

Click on project >> Options >>

It's right there at the top of the page. :)

But which is more important to count, S or V links?

I have focused more on verified, but submissions may or may not get verified, and some types of links never get verified, right?

Also, does anyone know what PAUSE vs. STOP actually does? I guess STOP sets the project inactive?

If a project stops for, say, 1200 minutes, but I manually restart it or turn GSA off and on again, will it still keep the project from running until those 1200 minutes have elapsed?

Thank you…