design patterns: should a function throw exceptions based on its expected behavior or its goal state?

My co-worker and I are discussing the correct design for an API. Let's say we have a function void deleteBlogPost(int postId). What should this function do if the blog post indexed by postId does not exist?

I think it would be appropriate to throw an exception, because a function should be designed to do one thing. When a caller invokes a function named deleteBlogPost, they always expect the post with ID postId to be deleted. Trying to delete a post with an invalid postId makes no sense, so an exception must be thrown.

My colleague argues that the caller does not really intend to delete a specific post, just to ensure that after the call the post does not exist. If you call deleteBlogPost with a non-existent post ID, the goal state has already been reached, so nothing should happen. He also noted that this design guarantees that calls to deleteBlogPost are idempotent, but I am not convinced that this is a good thing.

We can find examples of both patterns in existing APIs. For example, compare deleting a dictionary/map entry with a key that does not exist in Python versus Java:


my_dict = {}
del my_dict['test']   # KeyError: 'test'


Map<String, String> map = new HashMap<>();
map.remove("test");   // no exception thrown
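
Both conventions can be sketched side by side (a minimal illustration; `BlogStore` and its method names are invented for this example, not from either API above):

```python
class BlogStore:
    """Toy in-memory store illustrating the two delete conventions."""

    def __init__(self):
        self.posts = {}  # post_id -> post body

    def delete_post_strict(self, post_id):
        # Command semantics: deleting a missing post is a caller error.
        if post_id not in self.posts:
            raise KeyError(f"no post with id {post_id}")
        del self.posts[post_id]

    def delete_post_idempotent(self, post_id):
        # Goal-state semantics: ensure the post is absent; repeated calls are no-ops.
        self.posts.pop(post_id, None)
```

For what it's worth, HTTP DELETE is specified as idempotent, i.e. it follows the second convention: repeating the request leaves the server in the same state.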

Should a function throw exceptions based on its expected behavior or its goal state?

pr.probability – Expected value of the absolute value of a shifted binomial distribution

My research recently requires calculating $\mathsf{E}(|X-\frac{n}{2}|)$ where $X$ follows the binomial distribution with parameters $(n, p)$. When $p = \frac{1}{2}$, this is just the mean absolute deviation (MAD) and has a closed form; see this document for more details. But when $p \neq \frac{1}{2}$ I cannot calculate it directly. One idea is to try to calculate $\lim_{t \rightarrow 2} \mathsf{E}((X-\frac{n}{2})^{\frac{2}{t}})$, but I am not familiar with fractional moments either. Any reference or idea would be appreciated.

Thanks in advance.
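
For moderate $n$, the quantity can at least be evaluated exactly by direct summation over the binomial pmf (a numerical fallback to check any candidate formula against, not a closed form):

```python
from math import comb

def expected_abs_dev(n, p):
    """E|X - n/2| for X ~ Binomial(n, p), by summing over the pmf."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) * abs(k - n / 2)
               for k in range(n + 1))
```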

integration: where does this approximation for the expected value of an exponential come from?

Recently I have seen the following approximation in several papers, but I cannot understand where it comes from. The approximation is as follows:

Let $W_t$ be a standard Brownian motion and $f$ some smooth, positive function. Then we can make the following approximation $$\mathbb{E}\left(\exp\left(\int_0^t \frac{1}{f(W_u)}\,du\right)\,\Big|\, W_t = z\right) = \exp\left(\frac{1}{2}\left(\frac{1}{f(0)}+\frac{1}{f(z)}\right)t\right)+\mathcal{O}(t^2)$$ where $\mathcal{O}(t^2)$ is a term that collects all terms of order $t^2$ or higher.

I assumed that this should be some kind of Taylor series approximation, but I cannot see what expansion would produce the expression on the right-hand side above.

Another thought I had was that the approximation is very similar to a function that generates moments, more specifically to that of a normal distribution, but this did not lead to anything.

Any advice is appreciated!
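
A quick numerical sanity check of the claimed expansion (the choice of $f$ is arbitrary, and the Euler-style Brownian bridge discretization below is my own sketch, not taken from the papers):

```python
import math
import random

def f(x):
    # an arbitrary smooth, positive test function
    return 2.0 + math.sin(x)

def lhs_monte_carlo(t, z, n_steps=200, n_paths=2000, seed=0):
    """Estimate E[exp(int_0^t du / f(W_u)) | W_t = z] by simulating bridges."""
    rng = random.Random(seed)
    dt = t / n_steps
    acc = 0.0
    for _ in range(n_paths):
        w = 0.0        # bridge starts at W_0 = 0
        integral = 0.0
        for k in range(n_steps):
            s = k * dt
            integral += dt / f(w)
            # Brownian bridge step: drift pulls the path toward z at time t
            w += (z - w) / (t - s) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        acc += math.exp(integral)
    return acc / n_paths

def rhs_approx(t, z):
    """Leading term of the claimed approximation."""
    return math.exp(0.5 * (1.0 / f(0.0) + 1.0 / f(z)) * t)
```

For small $t$ the two sides agree to well within the advertised $\mathcal{O}(t^2)$ error, which is at least consistent with the formula.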

Numenera: higher than expected health in NPCs

I am trying to understand the health inflation commonly printed in Cypher System material (often in adventure modules, or in the small sidebars describing NPC stat blocks).

In Numenera, a creature's health (HP) is usually determined by its standard target number:

Numenera – Discovery, p 222 (also the same in the 1st edition):

Health: The target number of a creature is also usually its health, which is the amount of damage it can suffer before it is dead or incapacitated. For easy reference, entries always list the health of a creature, even when it is the normal amount for a creature of its level.

(The target number is 3 times the creature's level, for reference.)

Designers lean on the caveat that monsters sometimes simply break the usual defined health, often by a much larger number. I remember something in 1st-edition Numenera about doing this to provide more challenging combat for higher-level characters.

Briefly skimming Discovery / Destiny, I have pulled some examples:

  • Discovery p 367 – Teratoma – Level: 3 HP: 12
  • Discovery p 381 – Octopus – Level: 3 HP: 15
  • Discovery p 369 – Teratoma (M) – Level: 4 HP: 15
  • Destiny p 371 – Assassin – Level: 4 HP: 20
  • Discovery p 375 – Weymel – Level: 5 HP: 20
  • Discovery p 385 – Latos – Level: 5 HP: 25
  • Destiny p 389 – Halcus – Level: 5 HP: 20
  • Destiny p 389 – Drayva – Level: 5 HP: 20
  • Destiny p 362 – Khagun Semper – Level: 5 HP: 26
  • Destiny p 373 – Soludi – Level: 6 HP: 24
  • Destiny p 398 – Heri – Level: 6 HP: 27
  • Destiny p 398 – Scrose – Level: 7 HP: 30

There are many, many more examples spread across the Cypher System line: OG-Numenera, Discovery, Destiny, The Strange, and Predation. And they are not exceptional or sparingly used; HP inflation is extremely common. As you can see in this small list, the creatures range from boss encounters to humble random animals, without any rhyme or reason that I can perceive, across all level ranges.
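
For reference, the gap between the listed HP and the standard 3 × level can be tabulated directly from the examples above:

```python
# (name, level, listed HP) taken from the examples listed above
creatures = [
    ("Teratoma", 3, 12), ("Octopus", 3, 15), ("Teratoma (M)", 4, 15),
    ("Assassin", 4, 20), ("Weymel", 5, 20), ("Latos", 5, 25),
    ("Halcus", 5, 20), ("Drayva", 5, 20), ("Khagun Semper", 5, 26),
    ("Soludi", 6, 24), ("Heri", 6, 27), ("Scrose", 7, 30),
]

for name, level, hp in creatures:
    standard = 3 * level  # HP implied by the target-number rule
    print(f"{name:15s} level {level}: {hp} HP ({hp - standard:+d} vs standard)")
```

Every single creature in the list exceeds its standard HP, which is what makes the inflation look systematic rather than exceptional.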

My question is: is there a systematic process behind this? Is the standard HP suggested in the creature section simply too low? I am looking for designer notes, or even GMs' personal experience, to help assess the appropriate amount of HP to allocate to combatants.

dynamic programming: expected compressed string length

Consider a string of length N over an alphabet of up to K different characters. The compression algorithm works as follows: replace each maximal contiguous substring containing only one distinct character (repeated an arbitrary number of times) with 2 values: the character and the length of the substring. The length of any integer is counted as 1. For example, if a string compresses to "a,111,b,13", its length after compression is considered 4.

Find the expected length of the compressed string for given N and K, if the input string is chosen uniformly at random among all possibilities.
The source is
Can someone help me please?

I am a newbie, so if you can point me to some resources, that would be helpful too.
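
A brute-force reference (feasible only for tiny N and K) can be useful to validate any DP against. The counting below follows the rule stated above: each maximal run contributes 2 tokens (the character and its single-token length):

```python
from itertools import product

def compressed_length(s):
    # each maximal run of one character contributes 2 tokens:
    # the character and its (single-token) length
    runs = 1 + sum(1 for a, b in zip(s, s[1:]) if a != b)
    return 2 * runs

def expected_compressed_length(n, k):
    # exhaustive average over all k**n strings -- tiny inputs only
    total = sum(compressed_length(s) for s in product(range(k), repeat=n))
    return total / k**n
```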

Sharepoint online: JSON list formatting does not work as expected

I tried using the JSON code below to format a list as tiles.

It works as expected when the list is opened directly; however, the layout changes when the list is used as a web part on a page.

Figure 1: Open in the list


Figure 2: List used as a web part


Is there any way to make the layout in Figure 2 the same as in Figure 1?

I appreciate your great help!

JSON code (reconstructed into valid view-formatting JSON; the original paste had lost its braces, brackets, and some nesting):

    {
      "hideSelection": "true",
      "hideColumnHeader": "true",
      "rowFormatter": {
        "elmType": "a",
        "attributes": {
          "href": "[$URL]",
          "target": "=if([$OpenInNewTab] == true, '_blank', '')"
        },
        "style": { "float": "left" },
        "children": [
          {
            "elmType": "div",
            "attributes": {
              "class": "ms-bgColor-themeLighterAlt ms-bgColor-themePrimary--hover ms-fontColor-white--hover"
            },
            "style": {
              "display": "flex",
              "flex-wrap": "wrap",
              "min-width": "80px",
              "min-height": "50px",
              "margin-right": "10px",
              "box-shadow": "2px 2px 4px darkgrey"
            },
            "children": [
              {
                "elmType": "div",
                "style": { "padding": "0 25%" },
                "children": [
                  {
                    "elmType": "img",
                    "style": { "vertical-align": "top" },
                    "attributes": { "src": "[$thumbnail]" }
                  }
                ]
              }
            ]
          }
        ]
      }
    }


co.combinatorics – A question about the expected number of consecutive coin tosses with increasing bias

This is a question I found in a book and I don't know how to approach it. Thanks in advance for any help or suggestion.

I have a coin that comes up heads with probability $1$ on the first toss, $\frac{1}{3}$ on the second toss, $\frac{1}{5}$ on the third toss, …, $\frac{1}{2i-1}$ on the $i^{\text{th}}$ toss.

Suppose I score a point whenever I get 4 consecutive identical tosses (4 heads or 4 tails). If I toss 100 times, what is the expected number of points I would get?
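
The answer can at least be computed numerically by linearity of expectation, under the assumption (my reading of "a point") that every window of 4 consecutive identical results scores, including overlapping windows:

```python
def p_head(i):
    # probability of heads on the i-th toss (1-indexed): 1, 1/3, 1/5, ...
    return 1 / (2 * i - 1)

def expected_points(n=100, run=4):
    # linearity of expectation: sum, over every window of `run` tosses,
    # the probability that the window is all heads or all tails
    total = 0.0
    for i in range(1, n - run + 2):
        all_heads = all_tails = 1.0
        for j in range(i, i + run):
            all_heads *= p_head(j)
            all_tails *= 1 - p_head(j)
        total += all_heads + all_tails
    return total
```

If instead each maximal run of 4 or more scores only once, the windows are no longer independent events and a different (DP-style) computation is needed; the sketch above only covers the overlapping-window reading.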

Probability – Expected stopping time of a discrete process

Consider the stochastic process $X : \mathbb{N} \rightarrow \mathbb{N}$ defined as follows:

$$\left\{\begin{array}{ll}
X_1 = 0 \\
X_{n+1} = X_n + \mathbf{1}_{\{z_n \leq P_n\}} \\
P_n = (X_n + n)/(M + n)
\end{array}\right.$$

where $\left(z_n\right)_{n \in \mathbb{N}}$ is a sequence of i.i.d. random variables uniformly distributed over $\left(0, 1\right)$ and $M \in \mathbb{N}^*$ is a constant.

Define the stopping time $\tau = \inf\left\{j > 0 \,|\, X_j = M\right\}$.
I then want to show that $\mathbb{E}\left(\tau\right) \geq 2M$.

This problem arises from the analysis of the "snow plow" algorithm described here by Paolo Ferragina, which unfortunately comes with a flawed analysis.
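
Before attempting a proof, the bound can be sanity-checked by simulating the process directly (my own sketch, not from the linked notes):

```python
import random

def sample_tau(M, rng):
    # run the process until X_j == M and return that index j;
    # X holds X_n, starting from X_1 = 0 at n = 1
    X, n = 0, 1
    while X < M:
        P = (X + n) / (M + n)        # P_n = (X_n + n) / (M + n)
        if rng.random() <= P:        # z_n <= P_n
            X += 1
        n += 1                       # X is now X_n for the incremented n
    return n

rng = random.Random(0)
M = 5
samples = [sample_tau(M, rng) for _ in range(20000)]
print(sum(samples) / len(samples))   # empirical estimate of E[tau]
```

Since each step increases $X$ by at most 1 from $X_1 = 0$, every sample satisfies $\tau \geq M + 1$; the empirical mean gives a feel for how far above $2M$ the true expectation sits.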

c# – How do I ensure that interface implementations behave the way I expect?

Let's say there is a SomeMethod member on an ISomeInterface interface, as follows:

public interface ISomeInterface
{
    int SomeMethod(string a);
}

For the purposes of my program, all ISomeInterface consumers act under the assumption that the returned int is greater than 5.

I can think of three ways to address this:

1) In each object that consumes ISomeInterface, assert that the returned int is > 5.

2) In each object that implements ISomeInterface, assert that the int about to be returned is > 5.

Both of these solutions are cumbersome, since they require the developer to remember to do this in every implementation or consumption of ISomeInterface. Moreover, this relies on an implementation detail of the interface, which is not good.

3) The only other way I can think of is to have a wrapper that also implements ISomeInterface and delegates to the underlying implementation, as follows:

public class SomeWrapper : ISomeInterface
{
    private ISomeInterface obj;

    public SomeWrapper(ISomeInterface obj)
    {
        this.obj = obj;
    }

    public int SomeMethod(string a)
    {
        int ret = obj.SomeMethod(a);
        if (!(ret > 5))
            throw new Exception("ret <= 5");
        return ret;
    }
}

However, the problem now is that we again rely on an implementation detail, namely what the SomeWrapper class does, although with the benefit that the check is now confined to a single location.

Is this the best way to ensure that an interface is implemented as expected, or is there a better alternative? I understand that interfaces may not be designed for this, but then what is the best practice for using an object under the assumption that it behaves in a certain way, beyond what I can convey through the member signatures of an interface, without needing to add assertions every time it is instantiated? An interface seems like a good fit conceptually, if only it could specify additional constraints that implementations are supposed to satisfy.

Buffer dtype mismatch, expected 'SIZE_t' but got 'long long'

I can't copy and paste the error, so I have uploaded a screenshot; please check it out.