layout: positioning of div elements without server calculations

I have created a site that has a dynamically generated menu, which builds its tabs according to some settings in the backend. Selecting a tab reloads the entire page, with different CSS classes applied depending on the tab selected.

Basically I'm using style="position: absolute; left: xxxxpx", where the xxxx value is concatenated in the backend.

I want to change the site so that the graphical user interface is more independent of the backend.

I'm not sure how to go about changing this so that all the layout calculations are done in CSS. I was thinking of doing it with CSS variables and keeping essentially the same code, but I am not sure how to determine which node is which, and therefore how far each one should be offset.

[Screenshot: first tab selected — tabs Administration, Design Manager, Project Manager]

[Screenshot: 3rd tab selected — tabs Design Manager, Project Manager]

CSS

.tab-admin{
    background-color: var(--admin-tab-bg);
}

.tab-design{
    background-color: var(--design-tab-bg);
}

.tab-project{
    background-color: var(--project-tab-bg);
}

.topTabSelected {
    position: absolute;
    top: 31px;
    z-index: 8;
    color: white;
    font-weight: bold;
    border-left: 1px solid black;
    border-top: 1px solid black;
    border-right: 1px solid black;
    border-bottom: transparent;
    padding: 3px;
    border-radius: 5px 5px 0 0;
    width: 120px;
    height: 17px;
    text-align: center;
}

.topTab {
    position: absolute;
    top: 31px;
    z-index: 2;
    color: white;
    font-weight: bold;
    border-left: 1px solid black;
    border-top: 1px solid black;
    border-right: 1px solid black;
    padding: 3px;
    border-radius: 5px 5px 0 0;
    width: 120px;
    text-align: center;
}

.topTab a {
    text-decoration: none;
    color: white;
}

.topTab a:hover {
    text-decoration: underline;
    color: white;
}

Backend code to calculate positions:

$tabwidth = 132;
$widthmodifier = 200;

for ($mainpageCounter = 1; $mainpageCounter <= count($mainpage); $mainpageCounter++) {
    // Opening/closing markup for each tab; the left offset is concatenated
    // into the inline style here (the tab's CSS class goes on this div as well).
    $topTabDivStart = "<div style=\"position: absolute; left: {$widthmodifier}px\">\n";
    $topTabDivEnd = "</div>\n";

    if ($p == $mainpageCounter) {
        // Selected tab: plain title.
        echo $topTabDivStart . $mainpage[$mainpageCounter]["title"] . $topTabDivEnd;
    } else {
        // Unselected tab: title wrapped in a link (href omitted).
        echo $topTabDivStart . "<a href=\"...\">" . $mainpage[$mainpageCounter]["title"] . "</a>" . $topTabDivEnd;
    }

    $widthmodifier += $tabwidth;
}

functions: how do the following two (identical) calculations give two different results?

I have the following two pieces of code that give me two different results:

  N[(-Kp tp + Lc - tp Lc)/(Kp tp)] /. {Lc -> 6, tc -> 0.8, tp -> 0.2, Kp -> 1/3}

Which gives the answer as 23 (the correct answer).

(-Kp tp + Lc - tp Lc)/(Kp tp) /. {Lc -> 6, tc -> 0.8, tp -> 0.2, Kp -> 1/3}

Which gives the answer as 3 (which is obviously wrong).

Could someone explain how / why this happens and how to avoid this (possible) error?

Below is the image of the calculation on my machine:
[Image: the two calculations as evaluated on my machine]

design patterns – How to structure multiple OOP calculations?

I am currently working on a project that requires a series (almost 86) of calculations to be run based on user input. The problem is that each calculation has a series of requirements:

  • I should be able to hold a version variable to distinguish the changes in each implementation of the calculation algorithm. In this way, every time we modify an algorithm, we know which version was used in the specific calculation.
  • It must be able to load specific data from other modules within the application (that is, we have 8 entities) so that each one can choose the necessary information for its operation.
  • It should be able to determine whether it is "executable": a function that verifies that the fetched data (from the previous requirement) meets some custom criteria for each calculation, guaranteeing that the algorithm will execute correctly.
  • Each must have a different algorithm implementation.
  • Generate and store a series of execution metrics (logs), such as data fetch time, algorithm runtime, and sampleSize (the amount of data loaded to execute each specific calculation).

Currently what I have done is: create an abstract class Calculation with this structure:

abstract class Calculation<T, F> {
  /**
   * Logging Variables.
   */
  private initialDataFetchTime: Date;
  private finalDataFetchTime: Date;
  private initialAlgorithmTime: Date;
  private finalAlgorithmTime: Date;

  // Final result holding variable.
  private finalResult: T;

  // The coverage status for this calculation.
  private coverage: boolean;

  // Data to use within the algorithm.
  private data: F;

  // The version of the Calculation.
  public abstract version: string;

  // The form data from the User to be used.
  public static formData: FormData;

  /**
   * This is the abstract function to be implemented with
   * the operation to be performed with the data. Always
   * called after `loadData()`.
   */
  public abstract algorithm(): Promise<T>;

  /**
   * This function should implement the data fetching
   * for this particular calculation. This function is always
   * called before `calculation()`.
   */
  public abstract fetchData(): Promise<F>;

  /**
   * This is the abstract function that checks
   * whether enough information is available to perform the
   * calculation. This function is always called
   * after `loadData()`.
   */
  public abstract coverageValidation(): Promise<boolean>;

  /**
   * This is the public member function that is called
   * to perform the data-fetching operations of the
   * calculation. This is the first function to call.
   */
  public async loadData(): Promise<void> {
    // Get the initial time.
    this.initialDataFetchTime = new Date();

    /**
     * Here we run the data-fetching implementation for
     * this particular calculation.
     */
    this.data = await this.fetchData();

    // Store the final time.
    this.finalDataFetchTime = new Date();
  }

  /**
   * This is the public member function that is called
   * to perform the calculation on this field. This is
   * the last function to be called.
   */
  public async calculation(): Promise<T> {
    // Get the initial time.
    this.initialAlgorithmTime = new Date();

    /**
     * Here we run the algorithmic implementation for
     * this particular calculation.
     */
    this.finalResult = await this.algorithm();

    // Store the final time.
    this.finalAlgorithmTime = new Date();

    // Return the result.
    return this.finalResult;
  }

  /**
   * This is the public member function that is called
   * to perform the coverage-checking of this calculation.
   * This function should be called after `loadData()`
   * and before `calculation()`.
   */
  public async coverageCheck(): Promise<boolean> {
    // Execute the check function.
    this.coverage = await this.coverageValidation();

    // Return result.
    return this.coverage;
  }

  /**
   * Set FormData statically to be used across calculations.
   */
  public static setFormData(formData: FormData): FormData {
    // Store report.
    this.formData = formData;

    // Return report.
    return this.formData;
  }

  /**
   * Get the coverage of this calculation.
   */
  public getCoverage(): boolean {
    return this.coverage;
  }

  /**
   * Get the data for this calculation.
   */
  public getData(): F {
    return this.data;
  }

  /**
   * Get the result for this calculation.
   */
  public getResult(): T {
    return this.finalResult;
  }

  /**
   * Function to get the class name.
   */
  private getClassName(): string {
    return this.constructor.name;
  }

  /**
   * Function to get the version for this calculation.
   */
  private getVersion(): string { return this.version; }

  /**
   * Get all the Valuation Logs for this Calculation.
   */
  public async getValuationLogs(): Promise<CreateValuationLogDTO[]> {
    // The array of results.
    const valuationLogs: CreateValuationLogDTO[] = [];

    // Log the time the algorithm took to execute.
    valuationLogs.push({
      report: Calculation.formData,
      calculation: this.getClassName(),
      metric: 'Algorithm Execution Time',
      version: this.getVersion(),
      value:
        this.finalAlgorithmTime.getTime() - this.initialAlgorithmTime.getTime(),
    });

    // Log the time to fetch information.
    valuationLogs.push({
      report: Calculation.formData,
      calculation: this.getClassName(),
      metric: 'Data Fetch Load Time',
      version: this.getVersion(),
      value:
        this.finalDataFetchTime.getTime() - this.initialDataFetchTime.getTime(),
    });

    // Sample size is calculated and not an issue for this matter.

    // Return the metrics.
    return valuationLogs;
  }
}

And then I created a subclass for each calculation, extending the class above, like:

export class GeneralArea extends Calculation<number, GeneralAreaData> {
  /**
   * Versioning information.
   * These variables hold the information about the progress made to this
   * calculation algorithm. The `version` field is a SemVer field which
   * stores the version of the current algorithm implementation.
   *
   * IF YOU MAKE ANY MODIFICATION TO THIS CALCULATION, PLEASE UPDATE THE
   * VERSION ACCORDINGLY.
   */
  public version = '1.0.0';

  // Dependencies.
  constructor(private readonly dataSource: DataSource) {
    super();
  }

  // 1) Fetch Information
  public async fetchData(): Promise<GeneralAreaData> {
    // Query the DB.
    const dataPoints = await this.dataSource.getInformation(/**  **/);

    // Return the data object.
    return {
      mortgages: dataPoints,
    };
  }

  // 2) Validate Coverage.
  public async coverageValidation(): Promise<boolean> {
    // Load data.
    const data: GeneralAreaData = this.getData();

    // Validate to be more than 5 results.
    if (data.mortgages.length < 5) {
      return false;
    }

    // Everything correct.
    return true;
  }

  // 3) Algorithm
  public async algorithm(): Promise<number> {
    // Load data.
    const data: GeneralAreaData = this.getData();

    // Perform operation.
    const result: number = Math.min(
      ...data.mortgages.map(mortgage => mortgage.price),
    );

    // Return the result.
    return result;
  }
}

/**
 * Interface that holds the structure of the data
 * used for this implementation.
 */
export interface GeneralAreaData {
  // Mortgages matching some criteria.
  mortgages: SomeDataEntity[];
}

The idea is to allow ourselves to carry out three basic operations:

  1. Load the data for each calculation.
  2. Validate coverage for each calculation.
  3. If the previous step returns a general "true", run calculations.

However, this pattern has posed some problems, since FormData (the information the user submits) is stored statically, which means that if one calculation run is already in progress and another user submits their data, I can't set FormData without messing up the other user's calculations. On the other hand, passing the FormData to every constructor seems like a lot of work (if you think that should be the way, I'm not afraid to write the code ;)).
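
For illustration only (not something I have implemented; the names simply mirror the classes above), passing the form data into each instance instead of keeping it in a static field would look roughly like this:

// Sketch: FormData handed to each calculation instance instead of living in a
// static field, so concurrent requests cannot overwrite each other's data.
abstract class Calculation<T, F> {
  constructor(protected readonly formData: FormData) {}

  // ...same abstract members and loadData()/coverageCheck()/calculation()
  // as above, reading `this.formData` instead of `Calculation.formData`.
}

class GeneralArea extends Calculation<number, GeneralAreaData> {
  constructor(
    formData: FormData,
    private readonly dataSource: DataSource,
  ) {
    super(formData);
  }

  // ...fetchData()/coverageValidation()/algorithm() as above.
}

(In NestJS, request-scoped providers would be another way to get the same isolation, but the constructor parameter is the simplest sketch.)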

Maybe it's the quarantine getting to me, but am I missing something here? Currently, the final run looks like this:


public async performCalculation(formData: FormData): Promise<FormDataWithCalculations> {
  // Set general form data.
  Calculation.setFormData(formData); // <--- Error in subsequent requests :(

  // Instance Calculations.
  const generalAreaCalculation: GeneralArea = new GeneralArea(/** data service **/);
  // 85 more instantiations...

  // Load data for Calculations.
  try {
    await Promise.all([
      generalAreaCalculation.loadData(),
      // 85 more invocations...
    ]);
  } catch (dataLoadError) { /** error handling **/ }

  // Check for coverage.
  const coverages: boolean[] = await Promise.all([
    generalAreaCalculation.coverageCheck(),
    // 85 more coverage checks...
  ]);

  // Reduce coverage.
  const covered: boolean = coverages.reduce((previousValue, coverage) => coverage && previousValue, true);

  // Check coverage.
  if (!covered) { /** Throw exception **/ }

  // Perform calculations!
  const result: FormDataWithCalculations = new FormDataWithCalculations(formData);

  try {
    result.generalAreaValue = await generalAreaCalculation.calculation();
    // 85 more of this.
  } catch (algorithmsError) { /** error handling **/ }

  /*
   (( Here should go the log collecting and storing, for each of the 85 calculations ))
  */

  // Return processed information.
  return result;
}

I'm not afraid to write a lot of code if that means it is reusable, maintainable, and most importantly testable (yes, testing each calculation to make sure it does what it is supposed to do in normal and extreme cases is why I focused on classes, so each one has a test attached). However, I am completely overwhelmed by writing this tremendous amount of code instead of just writing 85 functions (which is what was used before) and calling each one of them.
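
To make concrete the kind of consolidation I am hoping exists (a rough sketch only, reusing the class names above; `dataSource` is a placeholder for the injected service), the instantiation and the three phases could in principle be driven from a single array:

// Sketch: build the ~86 instances once, then drive every phase from the array
// instead of repeating each call by hand.
const calculations: Calculation<unknown, unknown>[] = [
  new GeneralArea(dataSource),
  // ...the other calculations...
];

// Phase 1: load data for every calculation.
await Promise.all(calculations.map(calc => calc.loadData()));

// Phase 2: coverage check; every calculation must be covered.
const coverages = await Promise.all(calculations.map(calc => calc.coverageCheck()));
if (!coverages.every(Boolean)) { /** throw exception **/ }

// Phase 3: run every algorithm.
const results = await Promise.all(calculations.map(calc => calc.calculation()));

// Log collection follows the same shape.
const logs = (await Promise.all(calculations.map(calc => calc.getValuationLogs()))).flat();

Mapping each result back to its named field on FormDataWithCalculations would still need a key or lookup per calculation, so this removes the repeated orchestration but not all of the boilerplate.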

Is there a pattern? A guide? Advice? A reference? Study material? I can't seem to shrink this code any further, but I wanted to ask in case someone knows a better pattern for this kind of problem. In case it's useful for understanding how everything is connected, the code is TypeScript (a NodeJS API with NestJS).

Thanks in advance and apologies for my horrible English!

dg.differential-geometry – doing rigorous area / volume calculations using SIA

There are some intriguing "proofs" of theorems about areas and volumes using Smooth Infinitesimal Analysis. Some examples:

  • A proof that $\sin'(0) = 1$.
  • A proof that the surface area of a cone is $\pi r \sqrt{h^2 + r^2} + \pi r^2$.
  • A proof of the fundamental theorem of calculus.
  • A proof of the arc length formula.

These "proofs" can be found in the book A Primer of Infinitesimal Analysis by J. L. Bell. To what extent can these proofs be made rigorous?

Using the surface area of a cone example, we need some way to define the surface area before performing the calculation.
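
For reference, the classical formula the proof is aiming at comes from unrolling the lateral surface into a circular sector of radius equal to the slant height $\ell$:

$$\ell = \sqrt{h^2 + r^2}, \qquad A_{\text{lateral}} = \tfrac{1}{2}(2\pi r)\,\ell = \pi r \sqrt{h^2 + r^2}, \qquad A_{\text{total}} = \pi r \sqrt{h^2 + r^2} + \pi r^2,$$

so a rigorous definition of surface area would need to recover exactly this value.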

differential equations – parallel calculations

I have some problems with memory use and the speed of a parallel calculation. I have searched for answers, but I have only found allusions to the existence of a solution, nothing concrete.

The problem is that I have a differential equation with some parameters, and I want to calculate a single number from each solution and save that number in a table. (Also, the boundary conditions change every time, and I need some logic because the differential equation has points where it is numerically problematic.)

I currently have a table, T, which I initialize with some values and share with SetSharedVariable. Within the ParallelDo I basically have something like CalcT[parameters], and CalcT sets up the boundary conditions, has some logic to identify whether the numerical problems will appear and to correct them, solves the differential equation, stores what I need from the solution in T, and then clears the solution.

The memory used by the notebook keeps growing as the ParallelDo iterations proceed (it grows by several orders of magnitude more than T does).

I've seen many answers saying

do not use SetSharedVariable

but without really suggesting an alternative.

Note: this was not a problem before using ParallelDo

opengl – Should I rewrite the light calculations for all shaders?

I have been working on understanding how OpenGL and GLSL work. There are many tutorials on shaders and buffers, but I still have many questions about how to apply shaders.

Imagine that I have a scene with multiple lights and multiple objects, where each object has its own shaders. Is there any way to write only one fragment shader for the light calculation and apply it to all objects?

mnemonic seed – BIP39 manual phrase calculations – How are multiple checksums valid?

I need help understanding the mathematics of why multiple checksums work when generating mnemonic phrases (BIP39).

Assume a 12-word phrase. If we divide the 2048-word list into groups of 16 … exactly 1 word out of each 16-word "block" will be a valid checksum word for the 11 selected words.

With a 24-word phrase, 1 word out of 256 would be a correct checksum.

When generating a phrase by hand … I know that the first ENT/32 bits of the SHA-256 hash are appended to the entropy to generate the checksum word … but this generates one specific word.

So I guess my long question is … what is the math behind the other checksum values that are valid? I guess my real question is: how is ENT + CS validated as legitimate?

See this example:

Entropy (128 bits): 11010011 01100100 00000010 01011110 01010011 11101100 01010011 01101110 01101010 01111000 11010010 11011000 10111010 00100011 11101100 11110010

SHA-256 Entropy hash = 14 c5 8b c9 05 11 5e 08 27 49 61 1e 48 d6 04 c0 2a 70 8c 39 ad 6c dc 0c 91 2f 70 62 c3 24 71 23

First 4 bits of the SHA-256 hash = 1 (hex) = 0001 (binary)

Generated recovery phrase: square cactus nurse share pond rescue prepare bottom suffer speed will tomorrow

Another valid phrase (same entropy but a different checksum word): square cactus nurse share pond rescue prepare bottom suffer speed will account

account = 13 (in the BIP39 word list). Subtract 1 since the word index starts at 0, giving 12. 12 in binary = 1100; hex = C.

C (hex) != 1 (hex)

Another valid phrase (same entropy but a different checksum word): square cactus nurse share pond rescue prepare bottom suffer speed will acoustic

acoustic = 17 (in the BIP39 word list). Subtract 1 since the word index starts at 0, giving 16. 16 in binary = 10000; hex = 10.

10 (hex) != 1 (hex)

I guess I am also confused about how these checksums are valid when their values are not equal to the first 4 bits of the SHA-256 hash of the ENT. I guess it has to do with the way the checksum is validated (which points back to my original question).
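
For concreteness, here is a minimal sketch of the validation step as I understand it (TypeScript with Node's built-in crypto; the 12-word / 128-bit case, with the bit-string representation used only for illustration):

import { createHash } from 'node:crypto';

// Sketch: validate ENT + CS for a 12-word phrase (128-bit ENT, 4-bit CS).
// `bits` is the 132-bit string obtained by concatenating the 11-bit indices
// of the 12 words.
function isChecksumValid(bits: string): boolean {
  const entBits = bits.slice(0, 128); // ENT
  const csBits = bits.slice(128);     // CS = ENT / 32 = 4 bits

  // Pack the entropy bits into 16 bytes so they can be hashed.
  const entropy = Buffer.alloc(16);
  for (let i = 0; i < 128; i += 8) {
    entropy[i / 8] = parseInt(entBits.slice(i, i + 8), 2);
  }

  // The phrase is valid only if CS equals the first 4 bits of SHA-256(ENT).
  const hash = createHash('sha256').update(entropy).digest();
  const expectedCs = (hash[0] >> 4).toString(2).padStart(4, '0');
  return expectedCs === csBits;
}

Note that the ENT hashed here is whatever 128 bits the 12 chosen words actually encode, not a fixed value.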

The easiest way to perform big data calculations

I have a large data set and I need to perform some functions on it. This would take weeks on my laptop, so I was wondering: using cloud computing services, what is the easiest way to perform these tasks?

Cheers!

Geometry: sine and cosine functions seem discontinuous, which breaks my mathematical calculations :/

I'm sorry to bother you with this, as there may be a very obvious answer, but still: I just stumbled upon some old trigonometry stuff and realized that:

sin(0π) ≠ 0 and
sin(π/2) ≠ 1 and
sin(3π/2) ≠ -1

but the value at all those sine inputs should be undefined, since all these numbers (0, 1, -1) are actually limits; we cannot have a ratio of 0, since that would mean there is an angle of 0, which is not possible in a right triangle.

The same applies to 1 and -1 since there cannot be 2 right angles in a triangle :).

so, going further:

the range of sine

is not [-1, 1] but rather: (-1, 0) and (0, 1)

the domain (the sine inputs) is:

all x in the real numbers |

x % π != 0      # <- otherwise we get a ratio of 0, which is impossible

x % π/2 != 0    # <- otherwise we get a ratio of 1, which is impossible

x % 3π/2 != 0   # <- otherwise we get a ratio of -1, which is impossible

The chart should look like this, with the points excluded from the chart:

[Plot: sin(x) with those points excluded]


My question is: why is my thinking wrong?

Otherwise, all those fancy things like Euler's identity won't work for π, for example

e^(iπ) = cos(π) + i sin(π)

it would mean that:

e^(iπ) = undefined + i * undefined

which makes no sense.

Are all those -1, 0, 1 values just little helper crutches to keep us going so the math will "somehow" work?


  • the same applies to the cosine function, of course
  • sorry for some awkward notation, I've been warped by programming

dnd 5e – Mount size calculations

First of all: a triceratops is a Huge beast, not Medium (MM p. 80).

If it were actually Medium, then only Small and Tiny creatures could ride it. PHB p. 198:

A willing creature that is at least one size larger than you and that has an appropriate anatomy can serve as a mount

A rider and his mount are still considered different creatures and retain their statistics, including size.