operating systems – Is it possible to make a common PC Bluetooth card identify itself as a headset?

My objective would be to make an application that makes the computer identify as a headset, so I can connect my phone to it and route the audio of the calls to the computer.

I think this is highly related to security. I’m talking about the ability to make a device identify itself as something else: think of the USB Rubber Ducky, but replace “USB” with “Bluetooth”.

That’s why I posted here. The purpose of doing this is not malicious; I just want to connect my phone to the PC so I can hear the voice of the person calling me on my phone through the headset connected to my PC:

Phone -> Bluetooth -> Computer -> Headset
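
For what it’s worth, here is a rough sketch (my own, and heavily simplified) of the discovery side of this on Linux: using PyBluez to advertise the Hands-Free profile (UUID 0x111E) over SDP so the phone sees the PC as a hands-free device. Actually carrying call audio additionally needs the Bluetooth stack (e.g. BlueZ with oFono/PipeWire) to implement the HFP hands-free role, and on recent BlueZ the legacy SDP API used here requires running bluetoothd in compatibility mode – so treat this purely as an illustration of the “identify itself as a headset” part.

import bluetooth  # PyBluez

HFP_HANDS_FREE_UUID = "111E"   # 16-bit service class for the Hands-Free unit role

# Listen on any free RFCOMM channel; HFP uses RFCOMM for its control channel.
sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
sock.bind(("", bluetooth.PORT_ANY))
sock.listen(1)

# Publish an SDP record so the phone's service discovery sees a hands-free unit.
bluetooth.advertise_service(
    sock,
    "PC Hands-Free",
    service_classes=[HFP_HANDS_FREE_UUID],
    profiles=[(HFP_HANDS_FREE_UUID, 0x0106)],  # profile descriptor, version 1.6
)

client, addr = sock.accept()   # the phone connects here and starts AT-command negotiation
print("Connected by", addr)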

algorithm analysis – Course teaching time complexities in real life systems

Having mis-read “What course in CS deals with the study of RAM, CPU, Storage?”, I now wonder which course in CS deals with time complexities that include GPUs, multi-level CPU caches, seek times on hard disks vs. SSDs, and bandwidth to disk and RAM.

I was taught big-O notation, but it never took into account that I might have a GPU with 100s of cores, a limited amount of extremely fast cache, or a hard disk that has high bandwidth but a high seek time.

Which class teaches this extended version of algorithmic time complexity, one that takes real-world limitations into account?
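
As a concrete (made-up) illustration of the gap the question points at: both functions below do the same O(n²) amount of work in the RAM model, but one walks memory contiguously and the other with a large stride, so on common hardware the second is typically several times slower purely because of the cache hierarchy.

import time
import numpy as np

n = 4000
a = np.random.rand(n, n)          # C-ordered: each row is contiguous in memory

def sum_by_rows(m):
    total = 0.0
    for i in range(m.shape[0]):   # contiguous slices: cache lines are fully reused
        total += m[i, :].sum()
    return total

def sum_by_cols(m):
    total = 0.0
    for j in range(m.shape[1]):   # strided slices: each access drags in a mostly wasted cache line
        total += m[:, j].sum()
    return total

for f in (sum_by_rows, sum_by_cols):
    t0 = time.perf_counter()
    f(a)
    print(f.__name__, f"{time.perf_counter() - t0:.3f}s")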

How critical is encryption-at-rest for public cloud hosted systems?

I work as a solutions architect for web-based systems on AWS and, as part of this role, often respond to Information Security questionnaires. Nearly all questionnaires request information about data encryption at rest and in transit. However, only a much smaller percentage ask about other security aspects, such as password policies or the common web application security issues published by OWASP.

I wonder how common or likely access to clients’ data is within a public cloud provider such as AWS, Azure, or GCP. It seems a very high barrier for an external party to pass; even the data centers of small local web hosting companies seem to have very good physical access security. And informal conversations with bank employees tell me that accessing someone’s bank account without reason leads to instant dismissal, so surely public cloud providers have similar controls in place?

This is not to challenge the value of encryption at rest; it is very cheap to enable, so there is no reason not to turn it on. But where does it sit in terms of priorities?
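
For context on the “cheap to enable” point, this is roughly all it takes to turn on default server-side encryption at rest for an S3 bucket with boto3 (the bucket name is made up); EBS, RDS and the other major data stores have similarly small switches.

import boto3

s3 = boto3.client("s3")

# Default encryption at rest for every new object in the bucket (SSE-S3 / AES-256).
s3.put_bucket_encryption(
    Bucket="example-questionnaire-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)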

distributed systems – Why is the ‘Integrity’ property required in consensus protocols?

Formally a consensus protocol must satisfy the following three properties:

Termination

  • Eventually, every correct process decides some value.

Integrity

  • If all the correct processes proposed the same value “v”, then any correct process must decide “v”.

Agreement

  • Every correct process must agree on the same value.

“Termination” certifies that the protocol is resilient to halting failures. “Agreement” prevents any two correct processes from deciding on different values, which would break consensus. But what about “Integrity” – why is it required? If all correct processes propose “x” but then they all decide “y” (e.g. f(x) = y), is that a violation of consensus?
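
A toy sketch (mine, not from the post) of why Integrity earns its place: the “protocol” below satisfies Termination (it always returns) and Agreement (every process returns the same thing), yet it ignores the proposals entirely. Integrity is the property that rules out this kind of degenerate solution by tying the decision back to what was actually proposed.

def degenerate_decide(proposals: list[int]) -> int:
    # Terminates, and every correct process running it agrees on the decision...
    # ...but even if every correct process proposed 7, it decides 0.
    return 0                      # violates Integrity

def majority_decide(proposals: list[int]) -> int:
    # If all correct processes propose the same value v, the plurality is v,
    # so Integrity holds (crash failures only, no equivocation in this toy).
    return max(set(proposals), key=proposals.count)

print(degenerate_decide([7, 7, 7]))   # 0 -> Integrity violated
print(majority_decide([7, 7, 7]))     # 7 -> Integrity holds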

operating systems – Why can’t we compile 8086 Assembly for all OSs from any OS?

Porting to a different operating system is about a lot more than the particular assembly language you use. Different operating systems have different system calls, different libraries, and different APIs, and converting from one to another is usually not something that can be done in an automated way — even if both used exactly the same assembly language. Most programs do need to access the filesystem, the network, display things on the screen, and so forth, and all of those require interaction with the OS and libraries.
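
A small illustration of that point (mine, not the original answer’s): even the single step “write a line to the console” has to go through different OS entry points, so the same logic cannot simply be reassembled for another OS.

import ctypes
import sys

msg = b"hello from an OS-specific code path\n"

if sys.platform == "win32":
    # Windows: go through the Win32 API exported by kernel32.dll.
    kernel32 = ctypes.windll.kernel32
    STD_OUTPUT_HANDLE = -11
    handle = kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
    written = ctypes.c_ulong(0)
    kernel32.WriteFile(handle, msg, len(msg), ctypes.byref(written), None)
else:
    # Linux/macOS: go through libc's write(), a thin wrapper over the write syscall.
    libc = ctypes.CDLL(None)
    libc.write(1, msg, len(msg))   # fd 1 = stdout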

control systems – State Space Model in Controller Canonical Form

Mathematica by default puts state space model realizations in controllable companion form, as seen here:

tfsys = TransferFunctionModel[(b1 s^2 + b2 s + b3)/(s^3 + a1 s^2 + a2 s + a3), s];
StateSpaceModel[tfsys]

Which outputs a block matrix like:
$$
\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -a_3 & -a_2 & -a_1 & 1 \\ b_3 & b_2 & b_1 & 0 \end{bmatrix}
$$

However, I want it in controller canonical form, which should look like:
$$
\begin{bmatrix} -a_1 & -a_2 & -a_3 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ b_1 & b_2 & b_3 & 0 \end{bmatrix}
$$

StateSpaceModel offers the StateSpaceRealization option but it only has ControllableCompanion and ObservableCompanion, neither of which is what I want. Is there a simple way of getting the right state space form?
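
One observation (mine, not from the post): the controller canonical form shown above is just the controllable companion realization with the order of the states reversed, i.e. a similarity transform by the exchange matrix
$$
T = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}, \qquad
A_c = T A T^{-1}, \quad B_c = T B, \quad C_c = C T^{-1}, \quad D_c = D,
$$
and since $T^{-1} = T$, applying this transform to the companion matrices in the first block yields exactly the desired second block.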

Equinix's internal systems hit by ransomware attack

Data center and colocation giant Equinix has been hit with a Netwalker ransomware attack where threat actors are demanding $4.5 million for … | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1821631&goto=newpost

learning – Which two of these four subjects should I take if my goal is to do distributed systems software engineering?

It’s the last semester of my master’s programme and I’m really struggling to decide which two of these four subjects I should take:

  1. Cryptography
  2. Computer & Network Security
  3. Software Foundations (logic, formal verification, and PL foundations using the Coq proof assistant – same as this)
  4. Statistical Inference (same as mathematical statistics)

My goal is to become a software engineer doing distributed stuff, preferably in the blockchain or ML space (leaning more blockchain).

I’ve done a few systems courses (OS, Distributed systems, Networks) and ML/Math related subjects (ML, NLP, Calc 1-3, Linear Alg, Probability), as well as functional programming in Haskell.

I find all 4 of them interesting and am struggling to pick two. Which two should I pick, and why?

formal languages – Use of graph grammars/rewriting systems in compilers?

A(n imperative) program – in a higher-level language and, more importantly, in assembly language or intermediate representations like LLVM IR – can be formalized as a directed “port graph”, in which vertices are instructions and ports correspond to input/output operands/arguments. An optimization applied to a program’s code therefore corresponds to a rewrite applied to its port digraph.
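
As a toy sketch of that correspondence (my own illustration, not from the post): constant folding written as a single rewrite rule over an instruction graph whose edges attach to input ports.

from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                                            # "const", "add", "print", ...
    inputs: list[int] = field(default_factory=list)    # node ids feeding each input port
    value: int | None = None                           # payload for "const" nodes

def fold_constants(graph: dict[int, Node]) -> bool:
    """Apply the rewrite  add(const a, const b) -> const(a + b)  once, if it matches."""
    for nid, node in graph.items():
        if node.op == "add" and all(graph[i].op == "const" for i in node.inputs):
            a, b = (graph[i].value for i in node.inputs)
            graph[nid] = Node("const", value=a + b)    # replace the matched subgraph
            return True
    return False

# print(add(const 2, const 3))
g = {0: Node("const", value=2),
     1: Node("const", value=3),
     2: Node("add", inputs=[0, 1]),
     3: Node("print", inputs=[2])}

while fold_constants(g):    # keep rewriting until no rule matches
    pass
print(g[2])                 # Node(op='const', inputs=[], value=5); nodes 0 and 1 are now dead code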

Now, graph rewriting is a small but somewhat active area in itself. What I’m wondering is whether these kinds of systems have actually been put to explicit use in the context of a compiler – that is, whether the optimization phase has been represented as a rewriting process, or as a derivation using a port-graph grammar.

I’m not much of a “compilers guy” – I took a basic course on them in my undergraduate degree, and I know about LLVM and its IR – but it seems to me that graph rewriting systems are the “obvious” formalism to use; and at the same time, I see almost no FOSS projects involving such systems, nor do the papers about them discuss their use in compilers.

Note: I’m more interested in practical-use, popular-language compilers than in academia-only systems, but I’ll take what I can get.

25% of Microsoft systems users are still using Windows 7

Even with Windows 7 being out of support, a survey showed that 25% of Microsoft systems users are still using Windows 7, for many reasons.
When it was dropped from support, Microsoft lost 10% of its market share, but the system is still in use even now. People keep choosing it over Windows 10 for several reasons, such as Windows 10’s intensive hardware usage in most cases, the many internet-consuming apps it runs in the background, and its sometimes complicated usage.