pattern matching – DeleteCases when the list contains floats?

I am having trouble getting DeleteCases to match when the list contains floats (machine-precision numbers). E.g. this code might not match correctly because exact equality between floats is not a reliable operation:

ClearAll[a];
a = RandomReal[{0, 1}, 5]
DeleteCases[a, Max[a]]

How would you handle this type of situation? What is the robust way to do it?
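The question is about Mathematica, but the robust idea is the same in any language: compare with a tolerance instead of exact float equality. A minimal Python sketch of that approach (the helper name delete_close is made up for illustration):

```python
import math
import random

def delete_close(values, target, rel_tol=1e-9, abs_tol=1e-12):
    """Drop every element approximately equal to target,
    using tolerance-based comparison instead of exact float ==."""
    return [v for v in values
            if not math.isclose(v, target, rel_tol=rel_tol, abs_tol=abs_tol)]

random.seed(0)                      # deterministic for the example
a = [random.random() for _ in range(5)]
result = delete_close(a, max(a))
print(len(a), len(result))          # 5 4: only the maximum was dropped
```

The tolerances are a judgment call; for machine-precision reals a relative tolerance near machine epsilon is usually what you want.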

wallet – Use case for multiple entries in a single transaction

Is there a way to determine if a pair or more of Bitcoin addresses belong to the same wallet on the basis that a pair of addresses was used as input for a single transaction?

No.

Assuming that all inputs to a transaction come from the same wallet is often referred to as the “common-input-ownership heuristic”, but it is possible to create transactions using inputs from multiple wallets, so there is no way to apply this heuristic with 100% accuracy. In many cases it works, but in many cases it fails.
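For illustration, the heuristic itself is straightforward to implement: cluster addresses with union-find whenever they appear as inputs to the same transaction. Here is a minimal Python sketch (toy addresses, not real chain data); it is exactly this co-ownership assumption that coinjoins break:

```python
# Common-input-ownership heuristic as union-find over input addresses.
# Hypothetical toy data; a real tool would parse actual transactions.
parent = {}

def find(addr):
    parent.setdefault(addr, addr)
    while parent[addr] != addr:
        parent[addr] = parent[parent[addr]]  # path compression
        addr = parent[addr]
    return addr

def union(a, b):
    parent[find(a)] = find(b)

transactions = [
    ["addr1", "addr2"],   # inputs spent together -> assumed same wallet
    ["addr2", "addr3"],
    ["addr4"],
]
for inputs in transactions:
    for addr in inputs[1:]:
        union(inputs[0], addr)

same_wallet = find("addr1") == find("addr3")
print(same_wallet, find("addr1") == find("addr4"))  # True False
```

Under the heuristic, addr1 and addr3 land in the same cluster only because addr2 linked them; a single coinjoin input would merge clusters that in reality belong to different people.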

Can I verify that a transaction spending the address pair was not created with createrawtransaction or fundrawtransaction, for example via a block explorer?

No, the transaction data broadcast to the network deliberately carries no information about what software or method was used to prepare the transaction. In some cases, 'transaction fingerprinting' can be used to make a guess about this, but again it is a guess at best, and false positives can easily be created as well.

In which cases, besides transactions built with createrawtransaction and fundrawtransaction, can multiple inputs within a single transaction belong to different wallets?

Some examples:

  • Coinjoin transactions (e.g. Wasabi Wallet, JoinMarket)
  • Lightning Network channels (2-of-2 multisig)
  • Other multisig wallets (many possible situations)
  • Payjoin transactions (similar to coinjoins, though AFAIK not many implementations yet)
  • etc.

c++ – The case against path expressions in #include directives

I am preparing for a discussion with my fellow programmers about the use of the C/C++ #include directive. The code base I have to adapt to automotive standards uses includes of the form #include <path/out/of/the/blue.h>. To be precise: the projects pass a large set of include paths to the compiler (-Iinclude/me etc.), BUT the path expressions even reach outside those places, so that blue.h can only be found if the compiler internally combines one of the include paths with the path in the directive itself: include/me/ + path/out/of/the/blue.h. I have many complaints about this practice:

  • As far as I know, <> is reserved for system headers coming from the platform, and using it for project code is strongly discouraged. Compilation only works because the compiler also searches the -I include paths when resolving <> includes.
  • It creates a review nightmare: the included file is not found at the path rooted in the directory of the C or C++ file, and is not in any of the include locations, so you have to replay the compiler's search by hand to finally find it somewhere; even then you cannot be sure, since the compiler might have searched differently.
  • The same file name occurs several times in the project tree: blue.h can be found in several places, and sometimes one blue.h serves as a dispatcher file that includes a more specific, "true" blue.h further down the directory tree. Which blue.h gets selected is controlled by #define PLATFORM macros and the like.
  • It creates a monolithic, reorganization-resistant project structure that couples the C and C++ interfaces (which live in a directory-free space at the language level) to the file system.
  • It spreads to new projects: as soon as one uses a header that itself includes other path-dependent headers, the new project's build script has to adapt to this usage.

We are using mbed-os, and its source tree seems to suffer from the same questionable code-structuring choice (IMHO).

As a TL;DR, you could say that I have a strong belief that it is not advisable to mirror the project structure in the source code. One has to describe plenty of structure and dependencies to the build system and the linker anyway; introducing a second coupling through the source files wreaks havoc, at least when one tries to change the build system (as I now find myself forced to do).

What is the general opinion on this? How flat or tree-shaped do you handle your includes?

PS: MISRA only talks about this problem remotely, although one might read it as "use nothing but plain header file names".

PPS: I'm not completely against the use of paths (well, I am in my own code, but I could live with it in legacy code) as long as it is not visible from the outside; the current versions of the projects, however, force one to adapt to exactly this usage.

postgresql 11: adding a case statement to the where clause leads to a bad plan choice

We have a lot of procedures that use case statements in their where clauses. Essentially, we want to run the same procedures at different times, and we set "flags" in the database to change the behavior of the procedures as a function of time, so that we reprocess the least amount of data needed when we execute them. The case statements take those flags into account.

I realize that is clear as mud, so let me give an example.
At times, we may want to rebuild the entire contents of a table, so we could run something like

insert into table_a select * from table_b

At other times, we just want to rebuild rows that match certain conditions:

insert into table_a select * from table_b where (business logic)

Queries can get quite complex, and we do this in many different procedures, so it would be bad practice to rewrite all queries for each scenario. So instead we do this:

update flag_table set flag = 't';

then at the top of the procedure we declare a variable:

declare _flag boolean;

. . .

_flag = (select a.flag from flag_table a);

. . .

insert into table_a select * from table_b where case when _flag then (business logic) end

Obviously, this is a rough approximation of our practices; I also say "procedures", but to be thorough, they are Postgres user-defined functions.
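One detail worth spelling out about the pattern above: the case expression has no else branch, so when _flag is false it yields NULL, and where treats NULL like false. A small Python sketch of that three-valued behavior (None standing in for NULL; this models only the truth table, not the planner):

```python
def case_when(flag, cond_value):
    # SQL: CASE WHEN flag THEN cond END  (no ELSE branch)
    # -> yields NULL (None here) when flag is false
    return cond_value if flag else None

rows = list(range(10))

# flag true: the business-logic condition decides which rows pass
kept_on = [r for r in rows if case_when(True, r % 2 == 0)]

# flag false: CASE yields NULL for every row, and WHERE drops NULL rows
kept_off = [r for r in rows if case_when(False, r % 2 == 0)]

print(kept_on)   # [0, 2, 4, 6, 8]
print(kept_off)  # []
```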

The following code recreates the problem that inspired this thread:

drop table if exists t_test; create temp table t_test as
    select  md5(random()::text) as test_text,
            (current_date - (random() * interval '5 years'))::date as date
    from generate_series (1,1000000);

drop table if exists t_sub; create temp table t_sub as
    select * from t_test order by random() limit 100000;

--efficient:
drop table if exists t_result; create temp table t_result as
    select  a.*
    from t_test a
    where   exists
        (
            select 1 from t_sub b where a.test_text = b.test_text and a.date between b.date - interval '6 months' and b.date + interval '6 months'
        );

--inefficient:
drop table if exists t_result; create temp table t_result as
    select  a.*
    from t_test a
    where   case when 1=1 then 
               exists
               (
                   select 1 from t_sub b where a.test_text = b.test_text and a.date between b.date - interval '6 months' and b.date + interval '6 months'
               )
            end;

Here is the query plan for the query labeled efficient:

Hash Semi Join  (cost=7453.88..54133.48 rows=58801 width=36)
  Hash Cond: (a.test_text = b.test_text)
  Join Filter: ((a.date >= (b.date - '6 mons'::interval)) AND (a.date <= (b.date + '6 mons'::interval)))
  ->  Seq Scan on t_test a  (cost=0.00..40086.54 rows=1058418 width=36)
  ->  Hash  (cost=4011.54..4011.54 rows=105918 width=36)
        ->  Seq Scan on t_sub b  (cost=0.00..4011.54 rows=105918 width=36)

Here is the query plan for the query that I have tagged as inefficient:

Seq Scan on t_test a  (cost=0.00..95755427.48 rows=529209 width=36)
  Filter: (SubPlan 1)
  SubPlan 1
    ->  Seq Scan on t_sub b  (cost=0.00..5335.51 rows=59 width=0)
          Filter: ((a.test_text = test_text) AND (a.date >= (date - '6 mons'::interval)) AND (a.date <= (date + '6 mons'::interval)))

The inefficient plan takes forever to execute (I gave up after 30 seconds). In our actual case we're not running this on millions of rows, just tens of thousands, but due to what I'll call server issues beyond our control, even in our smallest case the second query took several minutes to complete. So of course my question is: why is a different plan chosen for the second query, and is there anything we can do about it?

Please yell at me if I have left out something essential; this is only my second question here. Thank you.

Criminal case and civil case?

Hello friends,

Please tell me: what are the differences between a criminal case and a civil case?

How to keep the default Switch case as the original expression?

For example, take the function defined with Switch below.

In[]:= Module[{f},
 f[x_] := Switch[x, 1, 2, 3, 4];
 {f[1], f[3], f[x]}
 ]

Out[]= {2, 4, Switch[x,
  1, 2,
  3, 4]}

f[1] and f[3] evaluate as defined, but since no default case is defined, returning the original input would seem a more natural and friendlier result.

Is it possible to define f above so that the output becomes:

Out[]= {2, 4, f[x]}

I tried to use Unevaluated[f[x]], but it doesn't seem to work.
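Outside Mathematica, the same wish can be modeled with an explicit fallback. A hypothetical Python sketch (the Unevaluated class and the cases dict are made up for illustration): when no case matches, return a symbolic f[x] instead of a half-evaluated dispatch expression:

```python
class Unevaluated:
    """Stands in for Mathematica's unevaluated f[x]."""
    def __init__(self, head, arg):
        self.head, self.arg = head, arg
    def __repr__(self):
        return f"{self.head}[{self.arg}]"
    def __eq__(self, other):
        return (isinstance(other, Unevaluated)
                and (self.head, self.arg) == (other.head, other.arg))

cases = {1: 2, 3: 4}  # same pairs as Switch[x, 1, 2, 3, 4]

def f(x):
    # Unmatched input falls back to a symbolic form of the original call
    return cases.get(x, Unevaluated("f", x))

print(f(1), f(3), f("x"))  # 2 4 f[x]
```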

design: tightly or loosely coupled software components, simple case examples

"Loosely coupled" is not binary. It is not true or false; it is a matter of degree. Some people organize it into a hierarchy. Others throw math at it. I measure it by how miserable you're making life for anyone who needs to change the code.

In software engineering, coupling is the degree of interdependence between software modules; a measure of how closely connected two routines or modules are; the strength of the relationships between modules.

Wikipedia – Link

The weakest coupling is no coupling: B does not know A exists.

Next, say they know each other by reference.

This is a very, very low degree of coupling. It is the kind of coupling collections have: in B-->A, the only thing B knows is where to find A.

B might also know that A is an A, or at least some A-like thing; that is, B knows A's type, which is a little more coupled. It might learn the type from an import, or from being in the same package.

But that is still not as coupled as B needing to call A.foo(), and only after that call A.bar(), and depending on what that returned, call A.baz(). Yeesh.

And if now A also knows things about B, well, now you have entered cyclic-dependency hell.
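The gradations above can be sketched concretely. A hypothetical Python example (the classes A, LooseB, and TightB are made up): the loose version only holds a reference it could swap for anything, while the tight version hard-codes A's type and the foo-then-bar-then-baz call protocol:

```python
class A:
    def foo(self): return "prepared"
    def bar(self): return 2
    def baz(self): return "done"

class LooseB:
    """B only knows *where to find* some A-like thing (a reference)."""
    def __init__(self, collaborator):
        # any object works; no type or call protocol is assumed
        self.collaborator = collaborator

class TightB:
    """B depends on A's type AND on a call protocol:
    foo, then bar, then branch on bar's result to call baz."""
    def __init__(self, a: A):
        self.a = a
    def run(self):
        self.a.foo()
        if self.a.bar() > 1:
            return self.a.baz()
        return None

print(TightB(A()).run())  # done
```

Changing A's interface breaks TightB immediately; LooseB does not care.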

google ngram – "case insensitive" is not working – Web Applications Stack Exchange

For simple cases, Google Ngram's case-insensitive search works fine:

[screenshot]

but for others it does not:

[screenshot]

Obviously Google Ngram distinguishes the different capitalizations as distinct word types, which is NOT what I need.

I do not care about uppercase or lowercase; that's why I checked the "case insensitive" option.

Could someone give me a clue? Thanks in advance.

UML use case diagrams: mandatory/optional and independent use cases

Use cases are not intended to have any sequence between them.

Use cases are intended to represent interactions with the system that are of value to the actors (that is, use cases correspond to the actors' goals). UCs are in no way detailed specifications of the actions that will take place or of their order; a use case is just a placeholder for such specifications.

Use cases are, in principle, independent of each other, with the exception of the include and extend relationships, whose purpose is to promote the reuse of common behaviors / goals / interactions.

If you want to sequence behaviors, you should consider UML activity diagrams or BPMN diagrams.

guidelines: should we use 'Title Case' or 'Sentence case' for headlines and buttons?

Title Case for headlines and buttons

It is easier and faster for users if they can recognize the shapes of the words.

"We recognize words by their shape." This is also called the Bouma shape.

Read more: http://en.wikipedia.org/wiki/Word_recognition

Bouma form: http://en.wikipedia.org/wiki/Word_recognition#Bouma_shape

Some examples

http://www.nytimes.com
http://www.lifehacker.com
http://blog.facebook.com/

English SE and Writers SE are divided

This topic has already been discussed on English SE and Writers SE. The consensus among writers is that it depends on the style guide established by the organization.

If none is set, writers can pick either one, and they are fine as long as they are consistent.

If there is a style guide that your organization has subscribed to, look in it. Otherwise, do what you think is right.

Examples of style guides are found in one of the answers:
https://english.stackexchange.com/questions/6560/when-should-you-use-title-case

It's about style and standards. It is more important, in my opinion, to be consistent.

https://writers.stackexchange.com/questions/10399/how-should-i-capitalise-headlines-for-professional-web-writing-sentence-case-v

Note

My own theory is that internet users will always tend toward efficiency over grammatical correctness, which is why I'm a big proponent of Title Case. Why else would Urban Dictionary and internet acronyms have emerged?

My guess is that most users will spend less time on titles because they are deciding which content is right for them. Once they decide, the rest of the content follows sentence case and they can take their time if they need to.