Read a file txt and calculate the average of students grades in C#

Good morning. I’m trying to build a program in C# that, as a first step, reads the highest grades from a file called Grades.txt and saves them to another file called Bestgrades.txt; as a second step, it calculates the average of the grades and saves it to another file, AvgGrades.txt. I am stuck: how do I read the maximum and the average of the marks? What I did was create a txt file with student id, last name, and grades.

Can someone help me please?

Here is my code:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using System.Text;

class Projekt
{
    static void Main(string[] args)   // was: string() args, which is not valid C#
    {
        string filePath = @"C:\Users\Desktop\Grades.txt";
        List<string> lines = File.ReadAllLines(filePath).ToList();

        foreach (string line in lines)
        {
            Console.WriteLine(line);
        }

        string oldPath = @"C:\Users\Desktop\Grades.txt";
        string newPath = @"C:\Users\Desktop";
        string bestGrades = "Bestgrades.txt";
        FileInfo f1 = new FileInfo(oldPath);
        if (f1.Exists)
        {
            if (!Directory.Exists(newPath))
            {
                Directory.CreateDirectory(newPath);
            }
            // Path.Combine avoids the missing-separator bug in the original
            // string.Format concatenation (which also appended the extension twice).
            f1.CopyTo(Path.Combine(newPath, bestGrades));
        }
    }
}
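The missing piece is just parsing each line and applying max and a mean. A minimal sketch of that logic (in Python for brevity; the whitespace-separated `id lastname grade` line format is an assumption based on the description above):

```python
# Sketch of the max/average logic, assuming each line of Grades.txt looks
# like: "<student id> <last name> <grade>", whitespace-separated.
lines = [
    "1 Smith 85",
    "2 Jones 92",
    "3 Brown 78",
]  # stand-in for open("Grades.txt").read().splitlines()

grades = [float(line.split()[2]) for line in lines]  # third column = grade
best = max(grades)                    # -> what goes into Bestgrades.txt
average = sum(grades) / len(grades)   # -> what goes into AvgGrades.txt
```

In C# the same idea is `File.ReadAllLines(filePath).Select(l => double.Parse(l.Split(' ')[2]))` followed by LINQ’s `Max()` and `Average()`.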

git – How to calculate lead time to deploy most efficiently?

I want to calculate the lead time to deploy of a team in Azure DevOps.

Lead time is the amount of time it takes for a commit to get into production; the code must therefore reach the master branch before deployment.

However, tracing every commit from its original feature branch into master, from where it gets deployed, sounds very clumsy.

So my question is: is this complicated approach necessary, or is there a more efficient way to calculate the timespan between the first commit on a feature branch and the merge into master that transitively contains that commit?
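For what it’s worth, once you can enumerate the commits a merge introduced, the arithmetic itself is trivial. A sketch (Python; the timestamps below are hypothetical, and the `git rev-list` command in the comment is one standard way to list exactly the commits a merge brought in):

```python
from datetime import datetime, timedelta

def lead_time(commit_times, deploy_time):
    """Lead time to deploy: production deployment minus the earliest commit."""
    return deploy_time - min(commit_times)

# Commit timestamps could come from
#   git rev-list --format=%aI <merge>^1..<merge>^2
# (the commits that merge introduced into master), and the deploy time
# from the Azure DevOps release record.  Hypothetical values:
commits = [datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 3, 14, 0)]
deployed = datetime(2024, 1, 5, 18, 0)
lt = lead_time(commits, deployed)   # 3 days, 9 hours
```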

Thank you very much in advance.

Can Bitcoin.org precalculate bitcoin history and distribute it instead of requiring every new user calculate it for hours?

I am trying to set up a desktop client of Bitcoin Core, downloaded from Bitcoin.org. The process requires me to download about 350 GB of data covering all bitcoin transactions, but there is an option to prune it to a size of the user’s choice, like 2 or 4 GB. Since the history is the same for everyone, why not precalculate these 2 or 4 GB and distribute them to new users? Why is it required that each user process all the data on their own computer?

algorithms – Using Chebyshev polynomials to calculate LOG, EXP and TRIG functions

I originally posted this over on Software Engineering, but it was suggested that I repost it here.

I am working on a BCD Floating point library and I’m a bit stuck on the EXP (antilog) functions. The goal is to maintain accuracy for the entire length of the number which could exceed 100 significant digits.

So the requirements are fast and accurate to N digits.

I have been experimenting with the Taylor series and found that it will yield an answer, but it requires about 2 iterations per digit, at two multiplies and one divide per iteration. I am also not satisfied that it is accurate: it seems to converge on the correct answer, only to drift away.

I would like to know whether this can be done with Chebyshev polynomials, converging on the correct answer in fewer iterations and clock cycles. I do have a Chebyshev example, but it is limited to 8 iterations and about 9 digits of accuracy, and I found no explanation of how to extend it to more significant digits.

Does anybody have a Chebyshev (or better) algorithm that can calculate EXP to any base? Please, no complicated math proofs, as they are likely over my head.
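Not a proof, just a concrete sketch of the standard recipe: sample the function at Chebyshev nodes to get series coefficients, then evaluate with the Clenshaw recurrence. This is Python at double precision, so it is illustrative only (the names `cheb_coeffs`, `clenshaw`, `cheb_exp` are mine); for N-digit BCD you would compute the coefficients in your own arithmetic and keep more terms, since the Chebyshev coefficients of exp decay roughly factorially:

```python
import math

# Approximate exp on [-1, 1] by Chebyshev interpolation, then evaluate
# with the Clenshaw recurrence.  For arguments outside [-1, 1] you would
# first reduce the range (e.g. exp(x) = exp(m) * exp(r) with small r).

def cheb_coeffs(f, n):
    """Chebyshev interpolation coefficients c_0..c_{n-1} of f on [-1, 1],
    via discrete orthogonality at the n Chebyshev nodes."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fv = [f(t) for t in nodes]
    c = [2.0 / n * sum(fv[k] * math.cos(math.pi * j * (k + 0.5) / n)
                       for k in range(n))
         for j in range(n)]
    c[0] /= 2.0
    return c

def clenshaw(c, x):
    """Evaluate sum_j c_j * T_j(x) without forming the polynomials."""
    b1 = b2 = 0.0
    for cj in reversed(c[1:]):
        b1, b2 = cj + 2.0 * x * b1 - b2, b1
    return c[0] + x * b1 - b2

COEFFS = cheb_coeffs(math.exp, 16)   # 16 terms: near double precision

def cheb_exp(x):
    """exp(x) for x in [-1, 1] via the precomputed Chebyshev series."""
    return clenshaw(COEFFS, x)
```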

algorithms – How to use Chebyshev polynomials to calculate exponents (antilogs)?

I am trying to write an algorithm to accurately calculate exponents (antilogs) for a variable precision floating point library I am working on. The base is not relevant since I can convert between them.

I was able to manually calculate log10() using repeated application of x^10. This is a digit-by-digit calculation and requires 4 multiplies per digit. I can reverse the algorithm to calculate exp10(), but that requires repeated application of a 10th root, and calculating a 10th root is significantly more CPU-costly than a 10th power.

I searched the web, and a lot of people suggested using a Taylor series to calculate exp_e(). I did that and found that it requires about 2 iterations per digit for accurate results, with only two multiplies and one divide per iteration. This is still a bit steep in terms of CPU cycles, especially when some FP numbers can be 100 digits long.

Now, I also found the algorithm that was used to calculate EXP in the old Sinclair ZX81. The author claimed that it was Chebyshev polynomials. I mention this because when I tested it, the algorithm was calculating accurately to one digit per iteration – much better than the Taylor Series.

I would use the algorithm as-is if it weren’t for the fact that the floating point library has to be accurate to an arbitrary number of digits. The ZX81 EXP code is only accurate to 8 digits. There is no explanation as to how to extend the number of iterations to get more accuracy.

So does anyone know how to calculate EXP() using Chebyshev polynomials? Can the approach be expanded like the Taylor series for more accuracy? Is there anything better than either?

(Please no long math proofs. That’s over my head. I just want the algorithms.)
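One standard answer to “how do I extend accuracy to N digits” is not a different series but argument reduction (scaling and squaring): halve the argument k times until it is tiny, run a short Taylor series on the reduced argument, then square the result k times. A sketch using Python’s `decimal` module as a stand-in for a BCD library (the reduction threshold and guard-digit counts are illustrative choices, not tuned values):

```python
from decimal import Decimal, getcontext

def exp_arbitrary(x, digits):
    """exp(x) to ~`digits` significant digits via argument reduction
    plus a Taylor series: exp(x) = exp(x / 2**k) ** (2**k).
    Shrinking the argument makes the series converge in very few terms."""
    getcontext().prec = digits + 10          # guard digits for the squarings
    x = Decimal(x)
    k = 0
    while abs(x) > Decimal("0.0001"):        # reduce until |x| is tiny
        x /= 2
        k += 1
    term = Decimal(1)                        # Taylor series on the reduced x
    total = Decimal(1)
    n = 1
    while abs(term) > Decimal(10) ** (-(digits + 5)):
        term *= x / n
        total += term
        n += 1
    for _ in range(k):                       # undo the reduction: square k times
        total *= total
    getcontext().prec = digits
    return +total                            # round to the requested precision
```

Each halving costs one divide and each undo one multiply, so the reduction adds only ~2k operations while cutting the series length dramatically; the guard digits absorb the error amplification from the k squarings.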

algorithms – How to calculate the basic steps in Fibonacci sequences to get $nF_n$ and $n^2$

Let’s start with the second function, fib2. If you count the number of operations, then you get only $O(n)$. So why is the running time given as $\Theta(n^2)$? The reason is that the Fibonacci numbers grow exponentially, and so addition can no longer be considered an $O(1)$ operation. The $n$-th Fibonacci number is $\Theta(n)$ bits in length, and this leads to a running time proportional to $\sum_{i=2}^n i = \Theta(n^2)$.

As for fib1, we can easily write a recurrence for the running time $T(n)$:
$$
T(n) = \begin{cases}
O(1) & \text{if } n \leq 1, \\
T(n-1) + T(n-2) + \Theta(n) & \text{if } n \geq 2.
\end{cases}
$$

For the sake of asymptotic analysis, we can replace $\Theta(n)$ with $n$ and consider $S(n) = T(n)/n$, which follows the recurrence
$$
S(n) = \begin{cases}
O(1) & \text{if } n \leq 2, \\
S(n-1) + S(n-2) + 1 & \text{if } n \geq 3.
\end{cases}
$$

Thus $R(n) = S(n) + 1$ satisfies $R(n) = R(n-1) + R(n-2)$.
Choosing appropriate initial values (which is fine, for the sake of asymptotic analysis), we find out that $R(n) = F_n$, and so $T(n) = nS(n) = n(R(n) - 1) = \Theta(nF_n)$.
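The discussion assumes the usual textbook pair (as in Dasgupta, Papadimitriou, and Vazirani): fib1 is the naive recursion and fib2 the iterative version. For concreteness, a sketch of the assumed definitions:

```python
def fib1(n):
    """Naive recursion: T(n) = T(n-1) + T(n-2) + Theta(n), which the
    analysis above shows is Theta(n * F_n) once Theta(n)-bit additions
    are charged their true cost."""
    if n <= 1:
        return n
    return fib1(n - 1) + fib1(n - 2)

def fib2(n):
    """Iterative version: n additions of numbers that are Theta(n) bits
    long, hence Theta(n^2) bit operations overall."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```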

physics – How to calculate distance covered by the beam? (friction force problem)

Question:
A uniform beam of weight W and length L is initially in position AB. As the cable is pulled over the pulley C, the beam first slides on the floor and is then raised, with its end A still sliding. If $\mu$ is the coefficient of friction between the beam and the floor, calculate the distance a that the beam will slide before it begins to rise. Ans. $L(1-\mu)$

My try:
I drew the F.B.D. like this,

Now, as the beam slides,
the force with which the beam slides is $F = P\sin 45^\circ - \mu N$,
and from the equations of equilibrium,
$$\Sigma F_y = 0 \\
\implies P\cos 45^\circ + N = W \\
\implies N = W - P\cos 45^\circ$$

$\therefore F = P\sin 45^\circ - \mu (W - P\cos 45^\circ) \\
\implies \frac{W}{g} \times a' = P\sin 45^\circ - \mu (W - P\cos 45^\circ) \\
\implies a' = \frac{g}{W}\left(P\sin 45^\circ - \mu (W - P\cos 45^\circ)\right)$

But I can’t understand how to relate the distance $a$ to the acceleration $a'$, as no final velocity is given. Can anyone suggest an approach?

google sheets – How can I calculate the difference between two time fields, where the second is greater than the first one?

I’m having trouble getting a negative time difference as a result in Excel. For instance:

G           H
08:30       06:00

I would like to get a column with the difference between H and G, so it would be filled with -02:30.

When I do a simple calculation, like G2-H2, I get 22:00 as a result.

Any ideas on how to fix this?

I saw the post How to handle Negative Time Delta in Google Spreadsheets, but none of the answers there has the solution I’m looking for.
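The underlying issue is that spreadsheets store times as fractions of a day, so a duration-formatted negative difference wraps modulo 24 hours instead of showing a minus sign. A sketch of the arithmetic (Python; the helper names are mine), doing the subtraction in whole minutes so the sign survives:

```python
def to_minutes(hhmm):
    """Parse an HH:MM cell value into whole minutes."""
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

def signed_diff(h_cell, g_cell):
    """Signed difference H - G, formatted as [-]HH:MM."""
    d = to_minutes(h_cell) - to_minutes(g_cell)
    sign = "-" if d < 0 else ""
    h, m = divmod(abs(d), 60)
    return f"{sign}{h:02d}:{m:02d}"

# What a duration format shows instead: the difference wrapped mod 24 h.
wrapped = (to_minutes("06:00") - to_minutes("08:30")) % (24 * 60)  # 1290 min = 21:30
```

In the sheet itself the equivalent trick is to subtract the serial values and format the number yourself rather than relying on a time/duration format.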

Calculate hockey puck bouncing off rubber bumper

I am very new to game development (or any type of development, for that matter) and I am trying to figure out how to calculate a hockey-type puck bouncing off a rubber bumper placed at a 45 degree angle. The puck could hit the bumper at any angle during play. Any help is greatly appreciated.
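For a flat bumper the standard approach is vector reflection about the surface normal, with a restitution factor for the rubber. A sketch (Python; the normal direction and the restitution value are illustrative assumptions, and a 45-degree bumper just means its normal sits at 45 degrees to the axes):

```python
import math

def reflect(vx, vy, nx, ny, e=1.0):
    """Bounce velocity (vx, vy) off a surface with unit normal (nx, ny):
    v' = v - (1 + e) * (v . n) * n, where e is the coefficient of
    restitution (e = 1 is a perfectly elastic bounce; rubber would be
    somewhat lower, e.g. ~0.8 -- an illustrative guess, not a measured value)."""
    dot = vx * nx + vy * ny
    if dot >= 0:          # moving away from the surface: no bounce
        return vx, vy
    return vx - (1 + e) * dot * nx, vy - (1 + e) * dot * ny

# 45-degree bumper whose face points up-and-left: normal (-1/sqrt2, 1/sqrt2).
s = 1 / math.sqrt(2)
vx, vy = reflect(1.0, 0.0, -s, s)   # puck sliding right into the bumper
# A rightward puck leaves the 45-degree bumper moving straight up: (0, 1).
```

Because the formula only uses the dot product with the normal, it already handles the puck arriving at any angle; the `dot >= 0` guard skips contacts where the puck is separating from the bumper.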

Given a random variable whose density is given by a uniform distribution, can we use the expected value to calculate another expected value?

Suppose we are given a random variable $X$ whose density is given by the following uniform distribution:
$$p(x) = \begin{cases}
1 & \text{if } 0 < x < 40 \\
0 & \text{otherwise}
\end{cases}$$

The price of $x$ is $x^2$. Say we wanted to find the expected cost: would we integrate like this,

$\int_{0}^{40} x^2 \, p(x) \, dx$

and take our final answer as the result of this or like this?

$\int_{0}^{40} x \, p(x) \, dx$

and then square the latter to get the expected cost?

They seem very similar yet they yield different results. Can anyone explain what the procedure is here and why?
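The first form is the correct one: for a function of a random variable, $E[g(X)] = \int g(x)\,p(x)\,dx$ (the "law of the unconscious statistician"), whereas squaring $E[X]$ gives a different quantity; the gap between the two is exactly the variance of $X$. Note also that the constant in the density should be $1/40$, not $1$, for it to integrate to 1. A quick numeric check (Python, assuming the normalized density):

```python
# Compare E[X^2] (the expected cost) with (E[X])^2, assuming the density
# is the properly normalised uniform one: p(x) = 1/40 on (0, 40).

def E(g, a=0.0, b=40.0, n=200_000):
    """Midpoint-rule approximation of the integral of g(x) * p(x) over (a, b)."""
    p = 1.0 / (b - a)
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) * p for i in range(n)) * h

e_x2 = E(lambda x: x * x)   # E[X^2]   = 1600/3, approx. 533.33: the expected cost
e_x = E(lambda x: x)        # E[X]     = 20
square_of_mean = e_x ** 2   # (E[X])^2 = 400: a different quantity
```

The two answers differ by $E[X^2] - (E[X])^2 = \operatorname{Var}(X) = 400/3$, which is why the procedures cannot be interchangeable.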