sql – Normalize a table with duplicate rows

I have a table ‘user’ that stores each user's email address and password, used to authenticate users at login. I have another table ‘operation’ that records the logged-in user's email and the path of the file whose privilege is being modified. The same table also records which user the privilege is being modified for, who modified it, the permission change made (e.g. read access granted, write access denied), and the time at which it happened.

Constraints:

  1. The same user can perform the same set of operations again at another time.
  2. Two or more users can perform the same set of operations at the same time.
  3. Email is the primary key in the user table.
  4. Neither email nor time alone can be a primary key in the operation table, but the combination of the two can be used as the primary key (because of constraints 1 and 2).
 email   | who   | for    | path | permission      | time
---------+-------+--------+------+-----------------+------------------------
 a@b.com | Admin | SYSTEM |      | Read is allowed | 2020-08-09 21:35:45:12
 b@b.com | Admin | SYSTEM |      | Read is denied  | 2020-08-10 11:28:17:33
 a@b.com | Admin | Admin  |      | Write is denied | 2020-08-10 11:28:17:33

How should I normalize the table up to 3NF?

I have tried a few decompositions, but I think each of them has issues. Please help me.

1st attempt: [schema diagram]

2nd attempt: [schema diagram]

3rd attempt: [schema diagram]
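For comparison, here is a minimal SQL sketch of one possible 3NF decomposition. All table and column names other than user/operation are my own assumptions, and the composite (email, time) key follows constraint 4 above:

```sql
-- Hedged sketch; names and types are assumptions, not the original schema.
CREATE TABLE user (
    email    VARCHAR(255) PRIMARY KEY,
    password VARCHAR(255) NOT NULL
);

-- The repeated permission text ('Read is allowed', ...) moves to its own table.
CREATE TABLE permission (
    permission_id INT PRIMARY KEY AUTO_INCREMENT,
    description   VARCHAR(50) NOT NULL UNIQUE
);

CREATE TABLE operation (
    email         VARCHAR(255) NOT NULL,   -- logged-in user
    who           VARCHAR(50)  NOT NULL,   -- who modified the privilege
    for_user      VARCHAR(50)  NOT NULL,   -- whose privilege was modified
    path          VARCHAR(1024),           -- file affected
    permission_id INT NOT NULL,
    op_time       DATETIME(6)  NOT NULL,
    PRIMARY KEY (email, op_time),          -- constraint 4 above
    FOREIGN KEY (email) REFERENCES user(email),
    FOREIGN KEY (permission_id) REFERENCES permission(permission_id)
);
```

With the permission descriptions keyed off a lookup table, every non-key column in operation depends only on the (email, op_time) key, which is what 3NF asks for here.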

mysql – Normalize SQL to the same format

I want to write a package that will manage MySQL/PostgreSQL views. For that I need to check whether the view defined in the code is the same as the view in the database, which I retrieve with the SHOW CREATE TABLE command.

The problem is that the database parses the query and may change it slightly, adding aliases, etc. Is there a parser that could normalize both views to the same canonical format, so that I could compare them as strings?

Right now I use the approach of creating a temporary view, reading back its definition, and dropping the view. It works, but it does not look good.
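For reference, the temporary-view round trip described above looks roughly like this in MySQL (the view name and body are placeholders):

```sql
-- Create a throwaway view from the definition kept in code (body is a placeholder)
CREATE OR REPLACE VIEW _normalize_tmp AS
SELECT u.id, u.email
FROM users AS u;

-- Read back the server's canonical form of the definition
SHOW CREATE VIEW _normalize_tmp;

-- Clean up
DROP VIEW _normalize_tmp;
```

Comparing the SHOW CREATE VIEW output of both definitions sidesteps parser differences, since both strings have passed through the same server normalization.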

What techniques can you use to normalize the data in a grading app?

Which language should be used?
Which software should be used?

mysql – Determine relational model and normalize relationships

Hello, and thanks for looking at my question. I will try to keep this as simple as possible.

I am developing a simple training application for my university classes using Flask and MySQL (I don't know if it is important, but I will be writing 'straight' SQL, with no specialized toolkit for building the queries).

I have a relational model designed for the functionality I know I can finish by the end of the semester, but my problems with it are the following:

  1. I do not know if it is well suited to handling large volumes of tuples (I do not expect that many people will actually use it; I just want to be sure I have designed it properly, to get a better sense of best practices).
  2. It is not normalized to 3NF (or higher), which I need in order to deliver my database proposal.

The application will let the user create an account and specify their height, age, weight and, most importantly, their previous experience across 7 workout categories: Arms, Chest, Back, Shoulders, Legs, Core and Cardio. The 'experience level' in each category is measured on four levels, from beginner up to advanced. Once logged in, the user lands on the main page, where they can choose one of the workout categories to generate a training 'session' and also choose the difficulty of that session (beginner, easy, medium, difficult). They can also pick the recommended training session at the bottom of the main page, which is based on their previous sessions and account details. Once a session starts, the application generates it by choosing workouts that fit the category and difficulty level, and arranges them in a random order to be completed at different time intervals (I won't get bogged down in the details, but the order in which the workouts are generated is important to the application's functionality). From the main page the user can also review their previous sessions.

I just need help determining an optimal final schema/diagram for the relationships in my database, and making sure it is in 3NF.

Here is my approximate user interface design:

[UI mockups: Part 1, Part 2, Part 3]

Here is my current relational model:

[relational model diagram]

(As a side note, I may want to add calendar functionality to the application later, so some of the tables in my current relational model record timestamps and time_zone data.)
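Since the poster's diagram is not visible, here is a minimal MySQL sketch of one 3NF-style layout for the entities described above. Every name and type is my own assumption, not taken from the original model:

```sql
-- Hypothetical 3NF sketch for the training app; names/types are assumptions.
CREATE TABLE app_user (
    user_id   INT PRIMARY KEY AUTO_INCREMENT,
    height_cm DECIMAL(5,1),
    age       INT,
    weight_kg DECIMAL(5,1)
);

CREATE TABLE category (                -- Arms, Chest, Back, Shoulders, ...
    category_id INT PRIMARY KEY AUTO_INCREMENT,
    name        VARCHAR(30) NOT NULL UNIQUE
);

CREATE TABLE user_experience (         -- one experience level per user per category
    user_id     INT NOT NULL,
    category_id INT NOT NULL,
    level       VARCHAR(20) NOT NULL,  -- 'beginner' .. 'advanced'
    PRIMARY KEY (user_id, category_id),
    FOREIGN KEY (user_id)     REFERENCES app_user(user_id),
    FOREIGN KEY (category_id) REFERENCES category(category_id)
);

CREATE TABLE workout (
    workout_id  INT PRIMARY KEY AUTO_INCREMENT,
    category_id INT NOT NULL,
    difficulty  VARCHAR(20) NOT NULL,  -- 'beginner', 'easy', 'medium', 'difficult'
    name        VARCHAR(100) NOT NULL,
    FOREIGN KEY (category_id) REFERENCES category(category_id)
);

CREATE TABLE session (
    session_id  INT PRIMARY KEY AUTO_INCREMENT,
    user_id     INT NOT NULL,
    category_id INT NOT NULL,
    started_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (user_id)     REFERENCES app_user(user_id),
    FOREIGN KEY (category_id) REFERENCES category(category_id)
);

CREATE TABLE session_workout (         -- preserves the generated workout order
    session_id INT NOT NULL,
    position   INT NOT NULL,
    workout_id INT NOT NULL,
    PRIMARY KEY (session_id, position),
    FOREIGN KEY (session_id) REFERENCES session(session_id),
    FOREIGN KEY (workout_id) REFERENCES workout(workout_id)
);
```

The position column in session_workout is what keeps the random-but-important workout ordering inside the database rather than in application logic.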

Python – How to normalize an image data set

How can I normalize a set of image data? The only approach I know, and the one I have seen used, is dividing the matrix by 255, but I don't understand why, since this only changes the scale of the values.
There are several measures of central tendency and dispersion, but I don't know how I could apply them to images.
Note: I am dealing with a classification problem.
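For reference (my addition, not part of the original post): dividing by 255 is min-max rescaling of 8-bit pixel values into $[0, 1]$,

$$x' = \frac{x}{255},$$

while the "center and dispersion" idea corresponds to standardization, typically computed per channel over the training set only:

$$x' = \frac{x - \mu}{\sigma}.$$

Both leave the image content intact; their purpose is simply to put the inputs on a common scale that gradient-based training handles well.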

javascript – What would be the best approach to normalize data for an LSTM model (using TensorFlow) with this wide range of values?

I am new to machine learning and still trying to understand the concepts, so keep this in mind if my question is not as concise as it should be.

I am building a TensorFlow.js model with LSTM layers for time-series prediction (RNN).

The data set is sampled every few hundred milliseconds (at random intervals). However, the values produced span a very wide range: most of the readings are around 20, 40, 45, etc., but occasionally a value will reach 75,000 at the extreme end.

Therefore, the data range is from 1 to 75,000.

When I normalize this data using a standard min/max method to produce a value between 0 and 1, the normalized result for most readings is a very small decimal with many significant digits, for example: '0.0038939328722009236'.
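For reference (my own note, not from the post): min-max scaling is

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}},$$

so with $x_{\max} \approx 75{,}000$ the typical readings of 20–45 all collapse to values on the order of $10^{-4}$ to $10^{-3}$. A common remedy for such heavy-tailed data is to compress the range first, e.g. $x' = \log(1 + x)$, and only then apply min-max scaling.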

So my questions are:

1) Is this min/max method the best approach to normalize this kind of data?

2) Will the RNN model cope with so many significant decimal places of precision?

3) Should I also normalize the output label? (There will be a single output.)

mysql – How to normalize a database that uses tables as references to other tables?

I am trying to create the database for my application, but I cannot figure out how to normalize my data in a MySQL database.

  • I have a Types entity for the map.

  • Each Type must have one or more associated Models, in a particular order.

  • Each Model has a Grid, an is_prediction flag and an Origin associated with it.

  • There may be more than one Model using the same Grid, Origin and is_prediction condition, differing only in the model's name.

  • Not all Origins provide all the Models.

  • A Type can only have Models associated with it that share the same [Grid, Origin, is_prediction] condition.

I tried to create a types_hierarchy table using grid_id, origin_id and is_prediction as foreign keys, but it seems incorrect, according to the answer to my other question here.

How can I create a normalized database for my needs?

This is what I tried to do:

[schema diagram]
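Since the diagram is not visible, here is a minimal sketch of one way to express these constraints in MySQL. All names are my own assumptions; the key idea is to give the shared [Grid, Origin, is_prediction] combination its own table, so that both models and types can reference the same row:

```sql
-- Hypothetical sketch; names are assumptions.
CREATE TABLE grid   (grid_id   INT PRIMARY KEY AUTO_INCREMENT, name VARCHAR(50) NOT NULL);
CREATE TABLE origin (origin_id INT PRIMARY KEY AUTO_INCREMENT, name VARCHAR(50) NOT NULL);

-- One row per distinct [Grid, Origin, is_prediction] combination.
CREATE TABLE model_condition (
    condition_id  INT PRIMARY KEY AUTO_INCREMENT,
    grid_id       INT NOT NULL,
    origin_id     INT NOT NULL,
    is_prediction BOOLEAN NOT NULL,
    UNIQUE (grid_id, origin_id, is_prediction),
    FOREIGN KEY (grid_id)   REFERENCES grid(grid_id),
    FOREIGN KEY (origin_id) REFERENCES origin(origin_id)
);

-- Models sharing a condition differ only by name.
CREATE TABLE model (
    model_id     INT PRIMARY KEY AUTO_INCREMENT,
    name         VARCHAR(100) NOT NULL,
    condition_id INT NOT NULL,
    FOREIGN KEY (condition_id) REFERENCES model_condition(condition_id)
);

-- A type is tied to exactly one condition.
CREATE TABLE map_type (
    type_id      INT PRIMARY KEY AUTO_INCREMENT,
    condition_id INT NOT NULL,
    FOREIGN KEY (condition_id) REFERENCES model_condition(condition_id)
);

-- Ordered association between types and models.
CREATE TABLE type_model (
    type_id  INT NOT NULL,
    position INT NOT NULL,
    model_id INT NOT NULL,
    PRIMARY KEY (type_id, position),
    FOREIGN KEY (type_id)  REFERENCES map_type(type_id),
    FOREIGN KEY (model_id) REFERENCES model(model_id)
);
```

The one rule plain foreign keys cannot enforce here is that a type's models all share the type's condition; that needs either a composite foreign key on (model_id, condition_id) or a check in the application.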

probability or statistics: how can I normalize a data set relative to data already provided previously?

Some scientific libraries, such as Scikit-Learn for Python, can create normalization/standardization functions based on previously provided data. Looking at the documentation for Normalize[] and Standardize[], I don't see any functionality for creating such a normalization or standardization function.

Is there any way I can use Normalize[] or Standardize[] without needing to load all of the previously used data, in an effort to make my program more efficient?

To clarify: I would like to give these functions data that is representative of the other data I will feed in, and then use the result to build a function that normalizes/standardizes any new data that arrives. Thank you!
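For reference (my own note): the "fitted" transformation being asked for is just standardization with frozen parameters. Compute the mean $\mu_{\text{ref}}$ and standard deviation $\sigma_{\text{ref}}$ once from the representative sample, store those two numbers, and apply

$$z = \frac{x_{\text{new}} - \mu_{\text{ref}}}{\sigma_{\text{ref}}}$$

to each new point, so the full history never has to be reloaded.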

Database design: help to normalize a 0 or 1 relationship

I have the following situation:

  • I have a collection of documents.
  • All documents have a name.
  • The name of the document is not known at the time it is loaded, but will be discovered later (metadata is extracted from the documents).
  • It will be the case that several different documents may have the same name.

In other words:

  • A document can have 0 or 1 names, 0 indicating that the name is not yet known.

  • A name can belong to 1 or many documents.

It is this last relationship that is giving me difficulty in modeling. If each document name were unique, I could simply model it like this:

Document:
Textfield Body

Name:
Char Value
fk document

But this will not work in my case, because then each name row could only be assigned to a single document, which means the name values would not be unique. Perhaps I should say explicitly that there will be value in the query "show me all documents that have a name". My current solution is to use a many-to-many field and then put a cardinality restriction in the application logic saying that a document can have at most one name. I wonder if there is a better solution that can be enforced at the database level. I would like the database to be normalized to at least 3NF, and if possible up to 6NF. In any case, I don't want NULLs.

I am using MySQL; the application is a Python Django application.
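One way to enforce this at the database level (my sketch, written as plain MySQL DDL rather than Django models): keep name values unique in their own table, and use a link table whose primary key is the document alone, which caps each document at one name without any NULLs:

```sql
CREATE TABLE document (
    document_id INT PRIMARY KEY AUTO_INCREMENT,
    body        TEXT NOT NULL
);

CREATE TABLE name (
    name_id INT PRIMARY KEY AUTO_INCREMENT,
    value   VARCHAR(255) NOT NULL UNIQUE   -- many documents may share one name
);

-- A row appears only once a document's name is discovered, so no NULLs;
-- PRIMARY KEY (document_id) enforces the 0-or-1 cardinality.
CREATE TABLE document_name (
    document_id INT PRIMARY KEY,
    name_id     INT NOT NULL,
    FOREIGN KEY (document_id) REFERENCES document(document_id),
    FOREIGN KEY (name_id)     REFERENCES name(name_id)
);
```

"Show me all documents that have a name" is then an inner join through document_name, with no application-level cardinality check needed.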

columnstore – Redshift: Performance-wise, is it preferable to store delimited values in a single column, or to normalize them into another table?

I have been looking to improve the performance of some tables loaded into a Redshift database. One of the columns in a table holds many delimited values that must be split apart and queried at runtime. The table has about 30 million rows, with an average of 10 delimited values in that particular column.

Coming from an OLTP background, my instinct is to normalize these values into another table. However, I have seen several posts suggesting that this is not the way to go. Seeking advice.
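For concreteness, the normalized alternative would look roughly like this (hypothetical names, Redshift dialect). Distributing and sorting both tables on the same key keeps the join between them local to each slice, which is the usual Redshift counterargument to "joins are slow, keep it denormalized":

```sql
-- Parent table, ~30M rows (hypothetical columns)
CREATE TABLE parent (
    parent_id BIGINT NOT NULL,
    payload   VARCHAR(256)
)
DISTKEY (parent_id)
SORTKEY (parent_id);

-- Child table: ~10 rows per parent (~300M total), one row per delimited value
CREATE TABLE parent_value (
    parent_id BIGINT NOT NULL,
    value     VARCHAR(64) NOT NULL
)
DISTKEY (parent_id)
SORTKEY (parent_id);
```

Whether this beats splitting the delimited string at query time depends on the workload, which is presumably what the posts arguing against normalization are weighing.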