javascript – Angular pipe to suppress sensitive information

I have created an Angular pipe to suppress confidential information such as credit card numbers, bank accounts, ABA numbers, etc.

This works well, but I would like to know if this is the best possible way to implement the logic.

Here is the TypeScript code for the logic of the pipe.

import { Pipe, PipeTransform } from '@angular/core';

@Pipe({ name: 'suppressInfo' })  // pipe name assumed; the decorator is not shown in the original post
export class SuppressInfoPipe implements PipeTransform {

  transform(valueToSupress: string, unSuppressedCount?: number): string {

    let suppressedOutput = '';

    const valueToRemainUnsuppressed =
      valueToSupress.substring(valueToSupress.length - unSuppressedCount, valueToSupress.length);

    let asteriskLength = valueToSupress.length - unSuppressedCount;

    for (let i = 0; i < asteriskLength; i++) {
      suppressedOutput = suppressedOutput.concat('*');
    }
    suppressedOutput = suppressedOutput.concat(valueToRemainUnsuppressed);

    return suppressedOutput;
  }

}

It takes the input string and the number of characters that should not be hidden, and returns the suppressed output; for example, transform('1234567890123456', 4) returns '************3456'.

Comments and suggestions are welcome.

Should I make an FPS game with the fixed-function pipeline or the programmable OpenGL pipeline?

I have a "FPS" game that I have programmed in Pipleline of fixed function and one made in OpenGL of programmable pipe. While the programmable tubing has many strange things that you can edit, it does not have the load identity that I need for the gun to be connected to the camera. There is little or no information on this subject and most of the information I can find is in the fixed function line instead of the programmable one. Keep in mind that with the fixed channel function, I can only use the glloadidentity function and attach it to then move on to something else. In the programmable, I do not know how to do this, so I spent a whole week looking for how to do it.

Should I stick with the fixed-function pipeline and leave the programmable pipeline behind?

What should I do?

Thank you!

OpenGL pipeline resource utilization

How does the OpenGL rendering pipeline work from the point of view of RAM/VRAM usage and CPU/GPU communication?

P.S.
I have tried searching the web, but have found nothing concrete so far …

computer architecture: in a pipeline (at least in MIPS), why is the incremented program counter address stored in the IF/ID pipeline register?

In D. Patterson's book, Computer Organization and Design, fifth edition, there is a paragraph that says

Instruction fetch: The top portion of Figure 4.36 shows the instruction being read from memory using the address in the PC and then being placed in the IF/ID pipeline register. The PC address is incremented by 4 and then written back into the PC to be ready for the next clock cycle. This incremented address is also saved in the IF/ID pipeline register in case it is needed later for an instruction, such as beq. The computer cannot know which type of instruction is being fetched, so it must prepare for any instruction, passing potentially needed information down the pipeline.

I am trying to understand why the incremented address should be saved in the IF/ID pipeline register, at least in MIPS. I understand that it might be needed later by some instruction.

However, how does an instruction like beq use the value of the program counter?
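For reference, beq is PC-relative in MIPS: the branch target is computed from the already-incremented PC as (PC + 4) plus the sign-extended 16-bit immediate shifted left by 2, which is why PC + 4 has to travel down the pipeline with the instruction. A minimal sketch of that arithmetic, in Python, assuming the standard MIPS I-type encoding:

    def beq_target(pc, imm16):
        """Branch target of a MIPS beq, computed relative to PC + 4."""
        # Sign-extend the 16-bit immediate field of the instruction.
        if imm16 & 0x8000:
            imm16 -= 0x10000
        # The immediate counts words, so shift left by 2 to convert to bytes,
        # then add it to the already-incremented PC saved in IF/ID.
        return (pc + 4) + (imm16 << 2)

    # Example: a beq at address 0x00400000 with immediate 3 targets 0x00400010.
    print(hex(beq_target(0x00400000, 3)))  # 0x400010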

architecture – design of a data processing pipeline

I have a use case for which I need to build a data processing pipeline:

  • Client contact data comes from different data sources such as CSV, database, and API, and should first be mapped to the fields of a universal schema (see the sketch after this list). There could be ~100k rows to process every day.
  • Then some of the fields must be cleansed, validated and enriched. For example, the email field must be validated by calling an external API to verify that it is valid and does not bounce, and the address field must be standardized to a particular format. There are other operations such as deriving the city and state from the zip code and validating the phone number. At least 20 operations are planned, with more to come in the future.
  • The above rules are not fixed and can change according to what the user wants to do with their data (configured from the user interface). For example, for a particular data source a user might choose only to standardize phone numbers, but not to verify that they are valid; therefore, the operations performed on the data are dynamic.
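As a rough illustration of the mapping step mentioned in the first bullet, here is a minimal sketch assuming pandas and hypothetical source column names:

    import pandas as pd

    # Hypothetical mapping from one source's column names to the universal schema.
    CSV_SOURCE_MAPPING = {
        "Email Address": "email",
        "Phone": "phone number",
        "Postal Code": "zip",
    }

    def to_universal_schema(df: pd.DataFrame, mapping: dict) -> pd.DataFrame:
        """Rename source columns to universal field names and keep only mapped fields."""
        return df.rename(columns=mapping)[list(mapping.values())]

    # Usage: universal_df = to_universal_schema(pd.read_csv("contacts.csv"), CSV_SOURCE_MAPPING)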

This is what I am currently doing:

  1. Load the data as a pandas DataFrame (Spark was considered, but the data set is not that big [max 200 MB], so there is no need to use Spark). Keep a list of the operations defined by the user that must be performed on each field, such as

    shares = {"phone number": [‘cleanse’, ‘standardise’], "zipper": [“enrich”, “validate”]}

As I mentioned earlier, the actions are dynamic and vary from data source to data source according to what the user chooses to do with each field. There are many custom business rules like this that can be applied to a specific field.

  2. I have a custom function for each operation that the user can define for the fields.
    I call them according to the "actions" dictionary and pass the DataFrame to the function; the function applies its logic to the DataFrame and returns the modified DataFrame (a dispatch sketch follows the example below).
def cleanse_phone_no(df, configurations):
    # logic that builds modified_df from df
    return modified_df
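A minimal sketch of how these per-field functions could be dispatched from the actions dictionary; the operation implementations here are hypothetical stubs standing in for the real cleansing/validation logic:

    import pandas as pd

    # Hypothetical stub implementations; the real ones would hold the actual logic.
    def cleanse_phone_no(df, configurations):
        df["phone number"] = df["phone number"].str.replace(r"[^0-9+]", "", regex=True)
        return df

    def validate_zip(df, configurations):
        df["zip_valid"] = df["zip"].str.fullmatch(r"\d{5}")
        return df

    # Registry mapping (field, action) pairs to the custom functions.
    OPERATIONS = {
        ("phone number", "cleanse"): cleanse_phone_no,
        ("zip", "validate"): validate_zip,
    }

    def apply_actions(df, actions, configurations):
        """Apply the user-selected operations to the DataFrame, field by field."""
        for field, operations in actions.items():
            for operation in operations:
                df = OPERATIONS[(field, operation)](df, configurations)
        return df

    # Usage with a subset of the actions dictionary shown above:
    # df = apply_actions(df, {"phone number": ["cleanse"], "zip": ["validate"]}, configurations={})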

I'm not sure if this is the right approach. Things will get complicated when I have to call external APIs to enrich certain fields in the future. So I'm considering a producer-consumer model:

a. Have a producer module that splits the file and publishes each row (one contact record) as a single message to a queue like AMQ or Kafka (see the sketch after this list).

b. Have the logic to process the data in the consumers: each one takes one message at a time and processes it.

c. The advantage I see with this approach is that it simplifies the data processing part: the data is processed one record at a time, so there is more control and granularity. The disadvantage is that it adds computational overhead because records are processed one by one, which I can mitigate to some extent by using multiple consumers.
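A minimal sketch of the producer-consumer split described above, assuming the kafka-python client and a hypothetical topic name; this is an illustration of the idea, not production code:

    import json
    from kafka import KafkaConsumer, KafkaProducer  # assumes the kafka-python package

    TOPIC = "contact-records"  # hypothetical topic name

    def process_record(record, actions, configurations):
        # Placeholder for the per-record cleanse/validate/enrich logic.
        return record

    # a. Producer: publish each row (one contact record) as a single message.
    def produce(rows):
        producer = KafkaProducer(
            bootstrap_servers="localhost:9092",
            value_serializer=lambda record: json.dumps(record).encode("utf-8"),
        )
        for row in rows:
            producer.send(TOPIC, row)
        producer.flush()

    # b. Consumer: take one message at a time and process it; running several
    #    consumers in the same group spreads the load across them.
    def consume(actions, configurations):
        consumer = KafkaConsumer(
            TOPIC,
            bootstrap_servers="localhost:9092",
            value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
            group_id="contact-processors",
        )
        for message in consumer:
            process_record(message.value, actions, configurations)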

Here are my questions:

  • What is your opinion about the approach? Do you have any suggestions for a better approach?
  • Is there a more elegant pattern I can use to apply the custom rules to the data set than what I am doing currently?
  • Is it advisable to use a producer-consumer model to process the data one row at a time rather than as a complete data set (taking into account all the logical complexity that will come in the future)? If so, should I use AMQ or Kafka?