keyboard – Reason for difference in behaviour of autocomplete interactions in input fields

There seem to be two different schools of thought when it comes to the autocompletion of input fields, and I am wondering if there is a particular reason or rationale for each.

Basically, when the user is completing an input field (usually text based), it is typical for an autocomplete suggestion to appear once previous inputs have been provided. The user can then choose to continue typing out the input or to accept the suggestion by:

  • Pressing the ENTER key
  • Pressing the TAB key

Alternatively, they can use the mouse and click on the autocomplete suggestion, but this breaks the flow of typing from the keyboard.

Is there any particular reason (both from a technical AND user perspective) why some sites use the ENTER key while others use the TAB key for this? I think this creates confusion, because a user is likely to encounter websites that implement this feature both ways.

Is there a reason to pick options besides "terrorism" in Facebook reports?

All other options take more clicks and/or don’t actually lead to a report. With that in mind, clicking “terrorism” is more efficient and, as far as I know, still counts as a report for the algorithm.

Is there a reason to use the other options?

CertGetCertificateChain doesn’t recognise revoked certificate if the reason is “unspecified”

In my program I use CertGetCertificateChain to investigate the validity of certificates.
If in my test PKI I revoke a certificate and specify the reason “unspecified”, the error code in the last parameter pChainContext->TrustStatus.dwErrorStatus is zero, meaning no error: the certificate is not considered revoked. However, in the Windows Event Log I can see the following entry:

(Event Log entry showing the revocation reason “unspecified”)

So the revocation and its reason were detected correctly; however, CertGetCertificateChain doesn’t let me know about it.
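
For context, the call looks roughly like this (a reduced sketch, not my actual program; the wrapper name IsCertRevoked and the choice of revocation flag are illustrative only):

#include <windows.h>
#include <wincrypt.h>
#pragma comment(lib, "crypt32.lib")

/* Reduced sketch: build a chain with full revocation checking and test
   the CERT_TRUST_IS_REVOKED bit. Error handling is omitted. */
static BOOL IsCertRevoked(PCCERT_CONTEXT pLeafCert)
{
    CERT_CHAIN_PARA chainPara = { sizeof(chainPara) };  /* no usage restrictions */
    PCCERT_CHAIN_CONTEXT pChainContext = NULL;

    if (!CertGetCertificateChain(
            NULL,           /* default chain engine */
            pLeafCert,
            NULL,           /* verify at the current time */
            NULL,           /* no additional store */
            &chainPara,
            CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT,
            NULL,
            &pChainContext))
        return FALSE;       /* chain building itself failed */

    /* CERT_TRUST_IS_REVOKED == 0x00000004; with reason "unspecified"
       this bit stays clear even though the CRL lists the certificate. */
    BOOL revoked =
        (pChainContext->TrustStatus.dwErrorStatus & CERT_TRUST_IS_REVOKED) != 0;

    CertFreeCertificateChain(pChainContext);
    return revoked;
}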

If I revoke the certificate with any other reason (e.g. “cessation of operation”), CertGetCertificateChain correctly returns pChainContext->TrustStatus.dwErrorStatus == 4, which means CERT_TRUST_IS_REVOKED, and in the event log I can see this:

(Event Log entry showing the revocation reason “cessation of operation”)

So my question is: Is this behavior of CertGetCertificateChain correct?

I spent some time researching this and found this document (RFC 5280). In section 6.3.2 (a) it says:

reasons_mask: This variable contains the set of revocation
reasons supported by the CRLs and delta CRLs processed so
far. The legal members of the set are the possible
revocation reason values minus unspecified: keyCompromise,
cACompromise, affiliationChanged, superseded,
cessationOfOperation, certificateHold, privilegeWithdrawn,
and aACompromise. The special value all-reasons is used to
denote the set of all legal members. This variable is
initialized to the empty set.

(emphasis is mine)

I’m not sure how to interpret this. Does it mean that the described algorithm must not consider the “unspecified” reason as revoked? If so, this would mean that CertGetCertificateChain behaves correctly. But then some follow-up questions arise: shouldn’t there be some big, all-caps, blinking warning along the lines of

“If you revoke a certificate, NEVER EVER choose ‘unspecified’ as the reason, because otherwise it won’t be considered revoked.”?

But maybe I’m not reading this correctly, so here are some other blind guesses of mine as to why CertGetCertificateChain doesn’t work as expected:

  • Maybe I need to configure my CRL to support the “unspecified” reason. But I don’t see where I can configure that.
  • Do I have to pass some extra flags to CertGetCertificateChain to make it consider the “unspecified” reason? I cannot see any flags that sound suitable…
  • Am I trying to solve a nonexistent problem? Maybe literally nobody uses the “unspecified” reason, and that’s why I find so little information about it?

Can anyone shed some light on this?

Is there any reason other than (effectively) extortion for Google to stop security updates for 2-year-old versions of Android?

I have a first-gen Google Pixel. It came with Android 9, then after I’d had it a while, upgraded itself to 10. Just about the time Android 11 came out, I was notified that Google would no longer supply updates (not even security updates, it seems) for my phone — then not yet quite two years old.

Sure, most people buy a new phone as soon as the purchase agreement runs out on their old one. I, however, bought the Pixel originally with the idea that, if I was going to spend $750 for a phone, I’d make it as future resistant as possible. I bought the best camera, fastest processor, and largest storage (128 GB) that I could find, which is how I wound up with a Pixel (Apple wasn’t in contention, because Apple).

Now, however, I have to choose between: buying a new Pixel 4a (for less than half what I paid for my original Pixel), essentially the same phone I have but with very slightly upgraded specs (camera hardware supports longer exposures, CPU is slightly faster), in order to get two years of updates on a new Android version; buying a Pixel 5 or something else to get a significant hardware upgrade (and paying as much as I did originally); or continuing to carry and use my original Pixel, which still works fine, holds a charge well, and is nowhere even vaguely close to full.

The only reason I have to think about this is that I’m apparently no longer eligible for security updates, meaning there’s the possibility that a flaw that’s been fixed for years in newer versions might let someone take control of my phone, steal my photos and stored passwords, take over my email, etc.

Is there any sensible reason (other than wanting more of my money) for Google to have stopped even security updates after a mere two years from release? Closely related: is there a way to upgrade manually, legally, with a non-rooted phone, and have the upgraded system work correctly with my hardware (camera etc.)?

The phone is on Verizon, if that matters (not sure why it would, but not sure it wouldn’t).

Is there a reason for FB Messenger to request the captive portal page (evil twin testing)?

While trying an evil twin attack and monitoring the traffic log on my phone using the HTTP Canary app, I noticed that the second the phone connects to the evil AP network, Facebook Messenger requests the captive portal page itself, while other apps request normal pages and receive a 302 redirect to the captive portal page, which is the expected behaviour. So why would the FB Messenger app fetch the actual portal page without showing it to me or anything?


equation solving – Not understanding the reason for not getting a full-rank matrix

I have the structure shown in the figure. It has four members. Each member is represented by two displacement fields $W_i$ and $U_i$. I have expressed these displacement fields using some assumed functions. I found the potential energy $v_i$ and kinetic energy $t_i$ associated with each member, then added them together to form the total potential energy $V$ and total kinetic energy $T$. Using these total energies I constructed the Lagrangian $Lg$ of the system. I have clearly marked the directions of the displacements with arrows in the figure. To satisfy these displacements I have imposed some kinematic constraints at the locations $z_i$: $W_1(x=z_i)=U_{1+i}(x=\gamma)$, $U_1(x=z_i)=W_{1+i}(x=\gamma)$ and $\frac{dW_1}{dx}\big|_{x=z_i}=\frac{dW_{i+1}}{dx}\big|_{x=\gamma}$. These constraints are added to the Lagrangian $Lg$ with unknown multipliers $\lambda_i$, $\alpha_i$, $\beta_i$. The modified $Lg$ is minimized with respect to the unknown constants (for $W_i$ these are $a, b, c, d$; for $U_i$ these are $e, f, g, h$) and the multipliers. After minimization, I formed the coefficient matrix of the resulting linear system. Its rank comes out as 16, but its size is $17 \times 17$. I don’t know what mistake I am making.

ClearAll("Global`*");
SetDirectory(NotebookDirectory());

Y = 2*^11;
ρ = 7850;
aa = 0.1*0.1;
Iyy = 0.1^4/12;
L1 = 4;

z(1) = L1/4;
z(2) = (2*L1)/4;
z(3) = (3*L1)/4;

γ = 0.5*L1;

k = 1;
beta1 = {1.8751, 4.69409, 7.85476, 10.9955, 14.1372, 17.2788};
mm = Table(((Cos(β*x) - 
        Cosh(β*
          x)) - (((Cos(β*γ) + 
           Cosh(β*γ))/(Sin(β*γ) + 
           Sinh(β*γ)))*(Sin(β*x) - 
          Sinh(β*x)))) /. {β -> beta1((i))/γ}, {i, 
    1, k});
nn = Table(((Cos(β*x) - 
        Cosh(β*
          x)) - (((Cos(β*L1) + 
           Cosh(β*L1))/(Sin(β*L1) + 
           Sinh(β*L1)))*(Sin(β*x) - 
          Sinh(β*x)))) /. {β -> beta1((i))/L1}, {i, 1, k});

beamV = Flatten({nn});

varbeam1 = Table(Subscript(a, i), {i, 1, k});
W(1) = Total(Table(Subscript(a, i)*beamV((i)), {i, 1, k}));
W1xx = Expand(D(W(1), {x, 2}));
v(1) = 0.5*Y*Iyy*Integrate(Expand((W1xx)^2), {x, 0, L1})
t(1) = 0.5*ρ*aa*ω^2 Integrate(Expand((W(1))^2), {x, 0, L1})


varbeam2 = Table(Subscript(b, i), {i, 1, k});
W(2) = Total(Table(Subscript(b, i)*beamV((i)), {i, 1, k}));
W2xx = Expand(D(W(2), {x, 2}));
v(2) = 0.5*Y*Iyy*Integrate(Expand((W2xx)^2), {x, 0, γ})
t(2) = 0.5*ρ*
  aa*ω^2 Integrate(Expand((W(2))^2), {x, 0, γ})

W(3) = W(2) /. b -> c;
W(4) = W(2) /. b -> d;

varbeam3 = varbeam2 /. b -> c;
varbeam4 = varbeam2 /. b -> d;

v(3) = v(2) /. b -> c;
v(4) = v(2) /. b -> d;

t(3) = t(2) /. b -> c;
t(4) = t(2) /. b -> d;

soft = Table(Sin(((2*i + 1)*π*x)/(2*L1)), {i, 0, 0});
barH = Flatten({soft});

Table(Plot(barH(( i)), {x1, 0, γ}), {i, 1, Length(barH)});

U(1) = Expand(
  Total(Table(Subscript(e, i)*barH((i)), {i, 1, Length(barH)})))
varbar1 = Table(Subscript(e, i), {i, 1, Length(barH)});
U1x = Expand(D(U(1), {x, 1}));

v(5) = 0.5*aa*Y (Integrate(Expand((U1x)^2), {x, 0, L1}))
t(5) = 0.5*ρ*
  aa*ω^2*(Integrate(Expand((U(1))^2), {x, 0, L1}))

vbsf = Table(Sin(((2*i + 1)*π*x)/(2*γ)), {i, 0, 0});
barV = Flatten({vbsf});

Table(Plot(barV(( i)), {x, 0, γ}), {i, 1, Length(barV)});

U(2) = Expand(
  Total(Table(Subscript(f, i)*barV((i)), {i, 1, Length(barV)})))
varbar2 = Table(Subscript(f, i), {i, 1, Length(barV)});
U2x = Expand(D(U(2), {x, 1}));

v(6) = 0.5*aa*Y*(Integrate(Expand((U2x)^2), {x, 0, γ}))
t(6) = 0.5*ρ*
  aa*ω^2*(Integrate(Expand((U(2))^2), {x, 0, γ}))

U(3) = U(2) /. f -> g;
U(4) = U(2) /. f -> h;

varbar3 = varbar2 /. f -> g;
varbar4 = varbar2 /. f -> h;

v(7) = v(6) /. f -> g;
v(8) = v(6) /. f -> h;

t(7) = t(6) /. f -> g;
t(8) = t(6) /. f -> h;

(*construction of lagrangian*)
T = Sum(t(i), {i, 8});
V = Sum(v(i), {i, 8});

n = 3;

dispcon1 = 
  Total(Table(
    Subscript(λ, 
     i)*((W(1) /. x -> z(i)) - (U(i + 1) /. x -> γ)), {i, 1, 
     n}));

dispcon2 = 
  Total(Table(
    Subscript(α, 
     i)*((U(1) /. x -> z(i)) - (W(i + 1) /. x -> γ)), {i, 1, 
     n}));

slopcon = 
 Total(Table(
   Subscript(β, 
    i)*(((D(W(1), {x, 1}) /. x -> (z(i) - 0.001))) - (D(
         W(i + 1), {x, 1}) /. x -> (γ - 0.001))), {i, 1, n}))

mp1 = Table(Subscript(λ, i), {i, 1, n});
mp2 = Table(Subscript(α, i), {i, 1, n});
mp3 = Table(Subscript(β, i), {i, 1, n});


varbeam = {varbeam1, varbeam2, varbeam3, varbeam4};
varbar = {varbar1, varbar2, varbar3, varbar4};
var = Flatten({varbeam, varbar, mp1, mp2, mp3})

Lg = (T - V) + dispcon1 + dispcon2 + slopcon
eq = Table(D(Lg, {var((i)), 1}), {i, 1, Length(var)})
Rarz = Normal@CoefficientArrays(eq, var)((2));
MatrixForm(Rarz)
Dimensions(Rarz)
MatrixRank(Rarz)
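
(* Not part of my original code — a possible way to locate the rank
   deficiency: the null space of Rarz shows which combination of the
   17 unknowns the equations leave undetermined. *)
NullSpace[Rarz]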


operating systems – Is there any reason I shouldn’t use a Linux host for a Linux guest VM?

Using a Linux-on-Linux virtual machine is perfectly fine and can make sense for better configuration management.

For example, you can keep a fairly stable, lean, and secure host system like Debian and go wild with different tools within a VM – before doing something that could go wrong like updating a distro or installing experimental drivers, you can back up the VM image.

The drawback would be that hardware-related things are more difficult in the VM, though whether that matters depends on your work. There is also the general inconvenience of introducing another tool (virtualization) where none is strictly needed.

But you should not reuse a VM from your previous employer. There’s a substantial chance you might leak confidential information that way. Definitely set up the new development environment from scratch, but consider scripting it this time for better reproducibility.

An anecdote on why using a VM could be sensible.
This year I worked on a project that dealt with low-level system programming.
The software was known to work on a specific libc version.
I didn’t use a VM and worked on the same system I used for other tasks like writing emails.
Unfortunately, my Linux distro was reaching its end of life – but I couldn’t update without breaking the project!
This led to a situation where I was mayyybe running a slightly out-of-date and insecure system for a few weeks.
Using a VM would have prevented this – I could have kept the host system up to date and developed the project in an isolated environment.
The next time I need a development system with a particular configuration (rather than a testing or production system that can be more easily managed with Docker), I’ll definitely spin up a VM.

man in the middle – At times bettercap ARP sniffing works great and at times not at all, what would be the reason?

I like to track the websites my daughter goes to in order to have some control. So I installed bettercap and set up a script to start it and sniff the URLs being accessed (well, the reverse DNS name of the IP, really).

sudo bettercap --eval 'set events.stream.time.format 2006-01-02 15:04:05;
                       set arp.spoof.targets 192.168.n.m;
                       arp.spoof on;
                       net.sniff on'

Note: the command is a single line (no new-line), I added new lines here for clarity.

The result is a large list of URLs as she hits one website or another. In particular, I see a ton of marketing websites (darn!). But at times I just see the messages:

endpoint detected as

and

end point lost

(the messages include the IP address and device name, in general).

So even though the end points are properly detected, no other data comes through.

My network looks more or less like this:

+--------+   +-------+
| Laptop |   | Phone |
+---+----+   +---+---+
    |            |
    |            |
    |            v
    |      +----------+
    +----->| WiFi Hub |
           +-----+----+
                 |            +-------------------+
                 |            | Main Server       |
                 v            |                   |
           +----------+       |   +-------------+ |
           | Switch   |<------+   | Kali Linux  | |
           +----------+       |   | (bettercap) | |
                 ^            |   | VPS         | |       +--------+
                 |            |   +-------------+ +------>| Router +----> Internet
                 |            |                   |       +--------+
                 |            +-------------------+
           +-----+-----+
           | Laptop    |
           | (Wired)   |
           +-----------+

So all the traffic from all the machines does go through the Main Server, using the FORWARD capability of the Linux firewall. In other words, the computers on the left are all isolated (they can still communicate with each other but not directly with the main server; the main server can connect to some of them, though). So the network is rather secure.

Since it worked before, I would imagine that the script is correct; but still, something makes bettercap on Kali work or fail just like that. I’m not too sure what I would need to do to make it work every time I reboot without having to fiddle with it (although this time the fiddling didn’t help; it’s still not tracking anything).

Is there a technical reason for modern on-camera flashes’ flash duration?

While trying to work out fill-in flash semantics, it struck me that for the purpose of “overpowering the sun”, a short flash duration is rather important, since it allows you to open up the aperture while decreasing exposure time, keeping the overall brightness constant while increasing the effect of the flash.
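
To spell out the arithmetic behind that claim (round numbers of my own, not manufacturer data): with the flash output fixed, only the ambient exposure depends on the shutter time $t$, while both ambient and flash exposure depend on the f-number $N$:

$$E_\text{ambient} \propto \frac{t}{N^2}, \qquad E_\text{flash} \propto \frac{1}{N^2} \quad \text{(as long as } t \ge t_\text{flash}\text{)}$$

Opening up one stop ($N \to N/\sqrt{2}$) while halving $t$ therefore leaves the ambient exposure unchanged but doubles the flash contribution; the trade only keeps working down to $t \approx t_\text{flash}$, i.e. to 1/1000s with a flash like the Regula but only 1/200s with the slower units.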

It turns out that my ancient (from the 80s I think) Regula Variant flash specifies 1/1000s flash duration at full power (for which it has guide number 40 at its native f=35mm light angle and up to guide number 70 at f=200mm when using a separate “tele lens” attachment with a fresnel lens).

Going through a list of Metz flashes of similar age and power (including wand flashes with about double the power output), the full-power flash duration pretty uniformly ends up as 1/200s. Requiring 5 times the exposure time makes shadow lifting at a distance quite a bit trickier. Particularly with fast leaf or electronic shutters, it significantly impacts the efficiency of a flash at dealing with competing-light situations.

A current-time Godox TT685 has a guide number of 60m at f=200mm (if we consider the Regula specs a bit optimistic, that may be comparable) and a specified duration of 1/300s.

So what gives with regard to the large difference in specification? It cannot be the switch from thyristor technology to IGBT since the older Metz flashes still use thyristors as well. Is the flash bulb different possibly (the size factor seems the same as comparable cobra head flashes today) or driven outside of its comfortable specs?

In analog camera times, the utility in “overpowering light” situations would have been more limited due to slower flash sync speeds, so the main utility of such specs would seem to have been motion freeze in the dark. With modern flashes (and modern superzooms), large reach at short flash time would seem at least as important as it was in the old days.

Why were those kinds of specs generally given up on for consumer-level on-camera flashes?

magento2 – Find reason for ‘main.ERROR: Cannot decode binary [] []’

We have a very general error in the system.log file. It is:

main.ERROR: Cannot decode binary [] []

It happens very frequently; however, we do not know the cause or how to debug it.
If anybody could tell us a good way to find the reason, so that we can solve it, that would be great.