filesystems – Why has Windows used NTFS for 20+ years, while many different filesystems have trended in the Linux community over the same time?

I’m a first-year MS CS student, and my data structures class has inspired me to research file systems and their implementations. I recall using ext2, then ReiserFS, then ext3, then ext4, and now btrfs seems to be the new thing. I understand (more or less) what changed between each of these and their relative improvements, but what I don’t understand is how NTFS has stayed relevant during roughly the same period of time (it looks like the last major version of NTFS shipped with Windows XP).

Was NTFS simply that well spec’d and designed from the beginning, or has Windows been working around NTFS deficiencies in the interest of not having to rewrite core parts of Windows from scratch? If that is the case, why are Linux distros so much more flexible in changing filesystems (the user can even select a different filesystem at install time)?

cryptography – Why public key systems involve private keys

Public key cryptography means that the entire communication between both parties is public, including the setup. Contrast this with the case of two parties $A,B$ meeting in secret, agreeing on some keyword, and using this keyword to encrypt future communications.

Clearly, if $A,B$ decide on the encryption scheme in public, something has to be kept private (otherwise anyone could decipher the messages just like the parties involved). This is the private key, so the flow is something along the following lines: $A$ and $B$ publicly discuss and share some information with each other and the world, then they each do something in private and send each other encrypted messages. Witnesses to the public exchange alone can’t recover what is being said.

The child-friendly version of such a scheme which I like is the following. Suppose $A$ and $B$ want to agree on some secret color, known only to them, while the entire exchange must be public. Under the assumption that mixing colors is easy, but recovering the components of a given mix is hard, $A$ and $B$ can each choose a secret (private key) color, denoted $a$ and $b$. Then $A$ sends $B$ the color $c$ (public key) together with the mixture $(a,c)$. $B$ now creates the mixture $(b,c)$ and sends it to $A$, and also mixes $b$ into the received $(a,c)$, keeping the compound $(a,b,c)$ to himself. Finally, $A$ adds $a$ to $(b,c)$ and is now also in possession of the secret mixture $(a,b,c)$, known to $A$ and $B$ but unknown to anyone who merely witnessed the interaction between them.
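The color story maps directly onto Diffie–Hellman key exchange. Below is a toy sketch of that mapping: "mixing" becomes modular exponentiation, and the shared public color $c$ plays the role of the public parameters $(g, p)$. The numbers are tiny and purely illustrative; real DH uses large, carefully chosen primes.

```python
# Toy Diffie-Hellman sketch mirroring the color-mixing story.
p = 23                     # public prime modulus (part of the "color" c)
g = 5                      # public generator (part of the "color" c)
a = 6                      # A's private color
b = 15                     # B's private color

A_sends = pow(g, a, p)     # the public mixture (a, c)
B_sends = pow(g, b, p)     # the public mixture (b, c)

# Each side folds its own private color into what it received:
secret_at_A = pow(B_sends, a, p)   # the compound (a, b, c)
secret_at_B = pow(A_sends, b, p)   # the compound (a, b, c)
assert secret_at_A == secret_at_B  # same secret, never sent on the wire
```

An eavesdropper sees $g$, $p$, `A_sends`, and `B_sends`, but recovering $a$ or $b$ from those is the discrete logarithm problem, the analogue of un-mixing a color.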

matrices – The “best way” to order unknowns in linear systems

Start with a linear system of the form

$$Ax + Bt + C = 0,$$

where $x = (x_1, \dots, x_n) \in \mathbb{R}^n$ is the vector of unknowns, $t \in \mathbb{R}^m$ is a vector of parameters, $A \in GL(n, \mathbb{R})$, $B \in \mathcal{M}_{n,m}(\mathbb{R})$ and $C \in \mathbb{R}^n$. Suppose you can apply Gauss reduction without pivoting. Then you end up with a system of the form

$$Ux + Pt + Q = 0,$$

where $U$ is an $n \times n$ unitriangular matrix, $P \in \mathcal{M}_{n,m}(\mathbb{R})$ and $Q \in \mathbb{R}^n$.
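As a concrete sketch of the reduction described above (plain Python, illustrative only, with no claims about numerical robustness), the following performs Gauss reduction without pivoting and surfaces the zero-pivot failure mode in question:

```python
def gauss_no_pivot(A, B, C):
    """Reduce A x + B t + C = 0 to U x + P t + Q = 0, where U is
    upper unitriangular (1s on the diagonal), using no row swaps.
    A is n x n, B is n x m, C has length n (plain nested lists).
    Raises ZeroDivisionError on a zero pivot."""
    A = [row[:] for row in A]; B = [row[:] for row in B]; C = C[:]
    n = len(A)
    for k in range(n):
        piv = A[k][k]
        if abs(piv) < 1e-12:
            raise ZeroDivisionError(f"zero pivot in column {k}")
        # normalize row k so the diagonal entry becomes 1
        A[k] = [v / piv for v in A[k]]
        B[k] = [v / piv for v in B[k]]
        C[k] /= piv
        # eliminate x_k from the rows below
        for i in range(k + 1, n):
            f = A[i][k]
            A[i] = [u - f * v for u, v in zip(A[i], A[k])]
            B[i] = [u - f * v for u, v in zip(B[i], B[k])]
            C[i] -= f * C[k]
    return A, B, C  # (U, P, Q)
```

Permuting the unknowns corresponds to permuting the columns of $A$ before calling this; e.g. $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ fails immediately, while the column-swapped version succeeds.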

Of course, you can do this for every permutation of $(x_1, \dots, x_n)$ that does not give you zeroes on the diagonal at some point of the Gauss reduction. Now my questions are:

  1. Is it possible to know in advance (i.e. without doing the whole computation) which permutation $(x_{i_1}, \dots, x_{i_n})$ of $(x_1, \dots, x_n)$ does not make Gauss reduction fail because of zeroes on the diagonal? I think this is very classical but I did not find any reference.

  2. Is it possible to know in advance which of the above permutations gives you the triple $(U, P, Q)$ with the greatest number of zeroes?

  3. Does it help if you make some assumptions on the form of $A$? In my situation $A$ is sparse, often it is also symmetric and weakly diagonally dominant.

Thank you in advance.

operating systems – CPU scheduling Decisions

Operating System – CPU scheduling Decisions

The question above discusses why a CPU scheduling decision does not take place on the ready → running transition.

But I wonder why CPU scheduling does not take place on the new → ready transition. I think it is similar to the waiting → ready transition: both add processes to the ready queue without changing the process that is currently running.
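For reference, the standard five-state process model can be written as a transition table (a sketch using the textbook labels; whether a kernel actually re-runs the scheduler on new → ready or waiting → ready is a design choice, not a law):

```python
# The textbook five-state process model as a transition table.
# A scheduling *decision* (choosing who runs next) is forced on
# transitions out of "running"; new->ready and waiting->ready only
# grow the ready queue, so the dispatcher is not required to run.
TRANSITIONS = {
    ("new", "ready"):          "admitted",
    ("ready", "running"):      "dispatched by the scheduler",
    ("running", "ready"):      "preempted / quantum expired",
    ("running", "waiting"):    "blocked on I/O or an event",
    ("waiting", "ready"):      "I/O or event completed",
    ("running", "terminated"): "exit",
}

# Transitions that leave the CPU idle-handed and force a decision:
forces_decision = [t for t in TRANSITIONS if t[0] == "running"]
```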


concurrency – Why does taking advantage of locality matter in multithreaded systems?

As we all know, when a thread/process accesses a memory address that is not cached, execution will (for the most part) stall until the data is fetched from memory. What I don’t understand is why, in multithreaded systems, we can’t save ourselves the headache of data-oriented design. Why can’t the processor/OS simply do work on a different thread until the data arrives?

I couldn’t find a good post on this exact question, and the answer may just be obvious to others. I only know so much about the pipeline and such, so there could be a very obvious reason that I simply don’t know.
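To make the cost being discussed concrete, here is a rough illustration: summing the same array sequentially versus in a shuffled order. The work is identical and only the access pattern differs. The effect is muted in an interpreted language (interpreter overhead dominates) and absolute timings are machine-dependent, but the shuffled walk is typically slower because each step is more likely to miss the cache.

```python
# Same computation, two memory access patterns.
import array, random, time

N = 200_000
data = array.array("q", range(N))  # 64-bit ints, contiguous in memory

seq_order = list(range(N))
rand_order = seq_order[:]
random.shuffle(rand_order)

def walk(order):
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return total, time.perf_counter() - start

total_seq, t_seq = walk(seq_order)
total_rand, t_rand = walk(rand_order)
assert total_seq == total_rand      # same result, different access pattern
print(f"sequential: {t_seq:.4f}s  shuffled: {t_rand:.4f}s")
```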

systems of equations – Square root of irrationals

The Art of Problem Solving: Volume 1 by Sandor Lehoczky and Richard Rusczyk – Example 6-14

We are trying to solve the system $xy = -12$ and $x^2 + 2y^2 = 34$, which will eventually help us solve for the square root of an irrational expression.
Solving for $y$ in terms of $x$ in the first equation gives $y = -12/x$, and substituting that into the second equation gives $x^4 - 34x^2 + 288 = 0$.
Factoring gives $(x^2 - 16)(x^2 - 18) = 0$. This leads to two integer solutions, $4$ and $-4$, and two irrational solutions, $3\sqrt{2}$ and $-3\sqrt{2}$. The authors of the book only use the integer solutions. Why can’t we use the irrational solutions?
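For what it’s worth, a quick numerical check shows the irrational roots do satisfy both original equations, so whether they are admissible presumably depends on constraints elsewhere in the problem rather than on the algebra:

```python
# Check that all four roots of x^4 - 34x^2 + 288 = 0, together with
# y = -12/x, satisfy BOTH original equations.
import math

roots = [4, -4, 3 * math.sqrt(2), -3 * math.sqrt(2)]
for x in roots:
    y = -12 / x
    assert math.isclose(x * y, -12)            # first equation
    assert math.isclose(x**2 + 2 * y**2, 34)   # second equation
```

For $x = 3\sqrt{2}$, for instance, $y = -2\sqrt{2}$, so $x^2 + 2y^2 = 18 + 16 = 34$.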

ds.dynamical systems – Uniformity of convergence in the pointwise ergodic theorem

Definitions and some motivation:

Let $X$ be a compact metric space, and $T$ a uniquely ergodic measure-preserving transformation on $X$, with associated invariant ergodic probability measure $\mu$. Assume $\mu$ is non-atomic.

Given a continuous function $f$ on $X$, we know by unique ergodicity that the Birkhoff averages $A_n f := \frac{1}{n}\sum_{k=0}^{n-1} T^k f$ converge uniformly to the constant function $\int_X f \, d\mu$. But how uniform is the convergence at one point relative to the convergence at other points?

Given a continuous function $f$ on $X$, and $n \in \mathbb{N}$, define the error function $E_n: X \times \mathbb{R}^+ \to \mathbb{R}$ by

$$E_n(x, r) := \frac{\int_{B_r(x)} A_n f \, d\mu}{\mu(B_r(x))} - \int_X f \, d\mu.$$

Define also, for each $\delta > 0$, the set $S_\delta := \{ (x, r) \in X \times \mathbb{R}^+ \mid \mu(B_r(x)) \geq \delta \}$.

Question: For fixed continuous $f$, is it true that for all $\delta > 0$, we have
$$\limsup_{n \to \infty} \; \sup_{(x_1, r_1), (x_2, r_2) \in S_\delta} \frac{E_n(x_1, r_1) - E_n(x_2, r_2)}{E_n(x_1, r_1) + E_n(x_2, r_2)} = 0?$$

Note: By convention we set $\frac{0}{0} = 0$.

operating systems – Renaming of linear list directory

Consider a linear list-based directory implementation in a file system. Each directory is a list of nodes, where each node contains the file name along with the file metadata, such as the list of pointers to the data blocks. Consider a given directory foo.

Which of the following operations will necessarily require a full scan of foo for successful completion?

A. Creation of a new file in foo

B. Deletion of an existing file from foo

C. Renaming of an existing file in foo

D. Opening of an existing file in foo

Here I understand how the creation of a new file necessarily requires a full scan, but I’m confused about how renaming a file necessarily requires one.
Please explain how renaming is done in this directory structure.
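One common justification is that rename must verify the new name is not already present anywhere in the directory, which forces a scan of the whole list even after the old entry has been found. A minimal sketch under that assumption (names and structure invented for illustration):

```python
# Linear-list directory sketch. rename() walks the ENTIRE list:
# it needs the node holding the old name, AND it must confirm no
# other node already uses the new name.

class Directory:
    def __init__(self):
        self.entries = []  # nodes: [name, metadata]

    def rename(self, old, new):
        target = None
        for node in self.entries:           # unconditional full scan
            if node[0] == new:
                raise FileExistsError(new)  # new name already taken
            if node[0] == old:
                target = node               # found, but keep scanning
        if target is None:
            raise FileNotFoundError(old)
        target[0] = new
```

By contrast, opening a file can stop as soon as the matching entry is found, so it does not necessarily scan the whole list.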

operating systems – What would happen in this priority-based round-robin CPU scheduling algorithm case?

I’ve been implementing some CPU scheduling algorithms in Python, and I am really confused about what would happen in one particular case.

Let us assume that the time quantum = 10.
Suppose the first process, arriving at time 0, has a burst time of 7 (i.e., less than the time quantum), and at time 5 another process with a higher priority arrives. In this case, would there be a context switch to the process that came later?
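The answer hinges on a design choice: with preemptive priorities the higher-priority arrival preempts immediately at time 5; with non-preemptive priorities the first process finishes its 7 units first. A sketch of the preemptive variant for this exact scenario (the quantum never matters here, since it only arbitrates between processes of equal priority, so the sketch simply re-evaluates priorities every time unit):

```python
# Preemptive priority scheduling, one decision per time unit.
def simulate(procs):
    """procs: dicts with name, arrival, burst, priority (lower = higher).
    Returns the timeline as (time, name) pairs, one per time unit."""
    timeline, t = [], 0
    remaining = {p["name"]: p["burst"] for p in procs}
    while any(remaining.values()):
        ready = [p for p in procs
                 if p["arrival"] <= t and remaining[p["name"]] > 0]
        if ready:
            # newly arrived high-priority work preempts immediately
            current = min(ready, key=lambda p: p["priority"])
            timeline.append((t, current["name"]))
            remaining[current["name"]] -= 1
        t += 1
    return timeline

timeline = simulate([
    {"name": "P1", "arrival": 0, "burst": 7, "priority": 2},
    {"name": "P2", "arrival": 5, "burst": 4, "priority": 1},
])
```

Here P1 runs for 5 units, P2 preempts and runs its 4 units, then P1 finishes its remaining 2.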

Thanks in advance.

usability testing – How do you prototype systems that are normally connected to Active Directory or other complex external systems?

I am working on a product that has a quite typical setup for enterprise software: it is usually connected to the Active Directory of the organization, authenticates its users against it, and fetches their group membership information from it. Permissions within the product are assigned to the groups that come from AD. For tiny installations and in test scenarios it is possible to add local users and groups, but in production usage it is almost always integrated with Active Directory.

We are planning some pretty significant changes to how permission settings can be made, and the mockups for the changes tested well when local users & groups were used. We would now like to see whether the interface works well in a more realistic scenario, where the product is connected to AD and there are thousands of users and groups.

I was wondering whether you have any experience or insight on how to do user tests in such a situation. Creating and maintaining a fake, internet-facing AD installation seems like overkill for this purpose, and it would also cause problems during the test, as it would be impossible to connect the real AD to the wireframe we want to test. Creating a mock AD user-management interface would also take tons of time and would probably still be quite far from how that UI normally works.

Do you have any experience with this, or more generally with doing wireframe tests of systems that in production are connected to large, complex external systems?
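One middle ground worth considering is generating a static directory snapshot (users, groups, memberships) and serving it to the prototype behind the same lookup interface the real AD connector would use. A sketch of such a generator; every name below is invented purely for illustration, and the name lists would be scaled up to reach realistic sizes:

```python
# Generate a reproducible fake directory snapshot for prototype testing.
import itertools, json, random

random.seed(42)  # reproducible test data

first = ["anna", "bela", "carl", "dora", "emil", "fay", "gita", "hugo"]
last = ["adams", "brown", "chen", "diaz", "ekberg", "fox", "gupta", "horvath"]

users = [f"{f}.{l}" for f, l in itertools.product(first, last)]  # 64 users
groups = {
    f"dept-{d}-{role}": random.sample(users, k=random.randint(3, 20))
    for d in ["sales", "eng", "hr", "legal", "it"]
    for role in ["all", "leads", "admins"]
}  # 15 groups with random memberships

# Serialize so the prototype can load it as a static fixture.
blob = json.dumps({"users": users, "groups": groups})
print(f"{len(users)} users, {len(groups)} groups, {len(blob)} bytes")
```

The point is not realism of the names but realism of the *shape* of the data: list lengths, group sizes, and overlap, which is usually what breaks permission UIs at scale.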