database: how do I check something when I don't know how it is structured?

How do you write and execute queries against a database or data warehouse if you don't know how the data is structured?

By "structure" I mean names of tables, fields, etc. Unless I have table names, how can I check something?

Clearly I don't know much about databases, but this part especially confuses me.
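
For what it's worth, most SQL databases can describe their own structure through a system catalog. A minimal sketch using the standard information_schema views (supported by SQL Server, PostgreSQL, and MySQL, among others; 'customers' is only an example name):

-- List the tables visible to the current user.
SELECT table_schema, table_name
FROM   information_schema.tables;

-- List the columns of one table ('customers' is a placeholder name).
SELECT column_name, data_type
FROM   information_schema.columns
WHERE  table_name = 'customers';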

sql server – INSERT .csv values into the database

We have a Sybase database behind an application used by our users; it contains customer information.

The situation is that the address information of approximately 1000 customers is outdated. We were given a .csv file with updated address information, and we are asked to update the addresses in the database.

I have filtered the .csv to compare the two address columns and keep only the rows where they differ, so I now have a .csv that contains only the customers whose address needs to be updated.

So imagine that this customer information lives in a table. We also have unique customer IDs alongside the other personal information in the customer table.

I am not an expert in SQL, so my query syntax is probably far off, but does anyone have suggestions on how I could write a query for this?

Something like,

FOR EACH [customer_id in .csv column A]: ENTER [info_address in .csv column B] INTO clientAddress.table

This is how I imagine it would work in my head anyway. I tried to search but I can't find anyone with a situation similar enough to mine.
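
For what it's worth, a common approach (a sketch only; the table and column names below are illustrative, not taken from the actual system) is to bulk-load the filtered .csv into a staging table and then update the customer table with a join on the customer ID:

-- 1. Staging table for the filtered .csv (load it with bcp or any import tool).
CREATE TABLE address_staging (
    customer_id INT          NOT NULL,
    new_address VARCHAR(255) NOT NULL
)

-- 2. Update the real customer table by joining on the customer ID.
UPDATE customers
SET    address = s.new_address
FROM   customers, address_staging s
WHERE  customers.customer_id = s.customer_id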

Cannot update a record from a trigger using a dynamic column in a PostgreSQL database

I am trying to get a column called XXX_LASTCHANGE updated every time a record is updated.
As XXX suggests, this is a dynamic value that I want to pass as a trigger argument instead of creating dozens of nearly identical triggers.

I came up with the following, but I cannot make it work: it fails with an error saying the stack is full.

CREATE OR REPLACE FUNCTION update_lastchange()   
RETURNS TRIGGER AS $$
BEGIN       

    EXECUTE format('
        UPDATE "%s" t
        SET    "%s_LASTCHANGE" = NOW()
        WHERE  ($1)."%s_ID" = ($2)."%s_ID"', 
        TG_TABLE_NAME, TG_ARGV[0], TG_ARGV[0], TG_ARGV[0])
    USING NEW, OLD;
    RETURN NEW;
END;
$$ language 'plpgsql';

CREATE TRIGGER cnt_lastchange BEFORE UPDATE ON "CONTENT" FOR EACH ROW EXECUTE PROCEDURE update_lastchange('CNT');

I am not sure what I am missing. Any clue is really appreciated.

Thank you.
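
For reference, a minimal sketch of the usual workaround (untested, and assuming the jsonb functions available since PostgreSQL 9.4): assign the column on NEW directly instead of running a second UPDATE on the same table, so the trigger does not fire itself recursively and fill the stack:

CREATE OR REPLACE FUNCTION update_lastchange()
RETURNS TRIGGER AS $$
BEGIN
    -- Build a one-key object such as {"CNT_LASTCHANGE": "..."} and merge it
    -- into NEW; columns missing from the object keep their current values.
    NEW := jsonb_populate_record(
        NEW,
        jsonb_build_object(TG_ARGV[0] || '_LASTCHANGE', now())
    );
    RETURN NEW;
END;
$$ language 'plpgsql';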

How can I read / write webform submissions in Drupal 7 from an external database?

I have a staging and production environment for my Drupal instance. When I push changes from staging to production, I run into the problem of losing submissions made in production, since the webform stores them in the Drupal database.

I am trying to find a way to read and write submissions from another database so that I don't lose submission data. I could not find an easy way to read the information from another database, considering that the webform is based on the node table in Drupal.

Does anyone have a suggestion for this problem? Apart from webhooks, which don't help with reading, I'm stuck. The most important part of this problem is reading from and writing to the external database.

50 CRORE DATABASE With name, number, email, location and many more for $ 15


This is a new database.

Screenshots of 20 million records

This is a complete set of more than 6 GB of data. It consists of 7 zip files to download; it is not sold piecemeal.

There is no demo, since it is not possible to share numbers with everyone who is interested. However, I can guarantee authenticity.

After making the payment, in 2 minutes you will receive an email from CLOUDG5 with the link to download the data.

I will give you 3 months to download this information.

You will also receive a gift: many useful programs to market and grow your business.


Need a suggestion on the minor upgrade procedure for a PostgreSQL database from 9.5.15 to 9.5.19

Thank you very much, everyone.

Could you help us with the following points?

We are trying to deploy the latest minor version of Postgres, 9.5.19.

We installed the Postgres server from the yum repository.

Existing version: 9.5.15
Operating system platform: RHEL 7.2

Target version: 9.5.19

What are the prerequisites?
How do we deploy the latest minor version?
What are the post-upgrade activities?
What are the rollback activities?

Please share some best practices for this activity.

azure sql database – Is a clustered index scan bad for performance?

I have a complex query (see below) that performs poorly on a large database.
I am now analyzing the query by removing the joins, looking at the actual execution plan, and adding the joins back step by step.
I see a clustered index scan with a high I/O cost and a 100% CPU cost for a sort.
Can this query be optimized further?

WITH currentPeriod AS
    (SELECT ROW_NUMBER() OVER (PARTITION BY tp.[StartDate]
      ORDER BY m.[Created] DESC) AS rowIndex
            , m.[Created], m.[MessageId]
            , tp.[StartDate], tp.[EndDate], tp.[IsCorrection]
       FROM [TimePeriods] tp
       JOIN [Messages] m on m.Id = tp.[MessageId]
     )
    SELECT [Created], [StartDate], [EndDate], [IsCorrection]
      FROM CurrentPeriod cur
     WHERE rowIndex = 1
     OPTION (RECOMPILE)

The plan (using ApexSQL):
Actual Execution Plan

I already changed the PK index on the Messages table from clustered to nonclustered and added a clustered index on Created.

This is the original plan, without the clustered index on Created:
Original plan

Background (can be omitted)
I'm not a DBA; I'm just the person who knows the most about databases here, which doesn't mean I know much about databases.
The tables are filled by a custom load application. The uploaded files are XML files. Each file is a message, and a message can contain multiple time periods. A message can also contain corrections to time periods from earlier messages. I am only interested in the most recently updated period; that's why I use ROW_NUMBER() OVER ... PARTITION BY. The data returned by the query is correct; that has been verified.

The complete query:

WITH currentPeriod AS
    (SELECT ROW_NUMBER() OVER (PARTITION BY ce.[ReferenceCode], tp.[StartDate], e.[Id], ep.[ContractNumber]
        ORDER BY m.[Created] DESC, ep.[ContractNumber] DESC, IIF(fp.[LbTab] = N'010', 1, 0), fp.[DatAanv] DESC) AS rowIndex
        , ce.[ReferenceCode], ce.[Name] as [EntityName]
        , e.[SocialSecurityNumber], e.[EmployeeNumber], e.[Initials], e.[Firstname], e.[Prefix], e.[Surname], e.[BirthDate]
        , CAST((DATEDIFF(DAY, e.[BirthDate], ep.[HireDate]) / 365.24) as FLOAT) [AgeHired], e.[Gender], e.[PhoneNumber], e.[PhoneNumber2], e.[Email]
        , TRIM(CONCAT_WS(' ', a.[Street], COALESCE(a.[Number], COALESCE(a.[NumberString], '')), COALESCE(a.[NumberExtension], ''))) [StreetLine]
        , a.[ZipCode], a.[City], a.[CountryCode]
        , ep.[HireDate], ep.[DepartureDate]
        , f.[AantVerlU], f.[LnSv], f.[AantSV]
        , fp.[IndAvrLkvOudrWn], fp.[IndAvrLkvAgWn], fp.[IndAvrLkvDgBafSb], fp.[IndAvrLkvHpAgWn], fp.[DatAanv]
        , fp.[LbTab], fp.[IndWAO], fp.[IndWW], fp.[IndZW]
        , tp.[StartDate], tp.[EndDate], tp.[IsCorrection]
        FROM [TimePeriods] tp
        JOIN [Messages] m on m.Id = tp.[MessageId]
        JOIN [CorporateEntities] ce on m.[CorporateEntityId] = ce.Id
        JOIN [Financials] f on f.[TimePeriodId] = tp.Id
        JOIN [FinancialPeriods] fp on fp.[FinancialId] = f.Id
        JOIN [EmployementPeriods] ep on ep.Id = f.[EmployementPeriodId]
        JOIN [Employees] e on e.Id = ep.[EmployeeId]
        JOIN [Addresses] a on a.Id = e.Id
        WHERE NOT EXISTS (SELECT 1
                            FROM [dbo].[Withdrawals] w
                            JOIN [dbo].[TimePeriods] tp2 ON tp2.Id = w.[TimePeriodId]
                            JOIN [dbo].[Messages] m2 ON m2.id = tp2.MessageId AND m2.[CorporateEntityId] = m.[CorporateEntityId]
                            WHERE w.SofiNr is not null AND w.SofiNr = e.[SocialSecurityNumber] AND tp2.StartDate = tp.[StartDate]
                            AND w.[NumIv] = ep.[ContractNumber] AND m2.[Created] > m.[Created] AND CONVERT(date, m.[Created]) <= CONVERT(date, @messageDate))
        AND CONVERT(date, tp.StartDate) >= CONVERT(date, @date26Param)
        AND CONVERT(date, tp.EndDate) <= CONVERT(date, @endDate) AND CONVERT(date, m.[Created]) <= CONVERT(date, @messageDate))

        SELECT *
        FROM CurrentPeriod cur
       WHERE rowIndex = 1 AND [AantVerlU] > 0 AND [LbTab] != N'010'
         AND (SELECT count(1)
                FROM CurrentPeriod cur2
                WHERE cur2.[StartDate] > DATEADD(week, -26, cur.[StartDate]) AND cur2.[StartDate] < cur.[StartDate]
                    AND COALESCE(cur2.[SocialSecurityNumber], LEFT(cur2.[ReferenceCode], 9) + '-' + cur2.[EmployeeNumber]) = COALESCE(cur.[SocialSecurityNumber], LEFT(cur.[ReferenceCode], 9) + '-' + cur.[EmployeeNumber])
                    AND cur2.[AantVerlU] > 0 AND cur2.[LbTab] != N'010') = 0
         AND StartDate >= @startDate
    ORDER BY [SocialSecurityNumber], [StartDate];
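
For what it's worth, one thing that often helps with this pattern is a covering nonclustered index, so the engine can feed the ROW_NUMBER sort without scanning the whole clustered index. This is only a sketch: the index name and column list below are guesses based on the simplified query, not taken from the real schema, and whether it helps depends on the actual plan:

-- Hypothetical covering index for the simplified query above; verify against
-- the actual execution plan before creating anything like this in production.
CREATE NONCLUSTERED INDEX IX_TimePeriods_StartDate_MessageId
    ON [TimePeriods] ([StartDate], [MessageId])
    INCLUDE ([EndDate], [IsCorrection]);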

mysql 5.6 – Difference between an Aurora global database and a regional Aurora database with multiple read replicas?

I am planning to migrate my 300 GB RDS PostgreSQL database to Aurora MySQL using SCT and DMS. In the current configuration, RDS PostgreSQL runs in seven regions. Data pipelines are used to ingest data into these instances and keep them synchronized. I was thinking that once I create a global database in one region, I would be able to add secondary instances in the six other regions. But I read that a global database only supports one additional secondary region.

The only relevant benefits of a global database, according to the documentation, are:

  • Having the additional secondary region gives faster replication
    compared to having a read replica.
  • Faster disaster recovery, since the secondary instance can be
    promoted to primary in less than a minute.

Now I wonder what the difference is between:

  1. Having a global Aurora database in one region with a writer and a reader, adding a secondary region, and adding five read replicas (in the primary or secondary region).

  2. Having a regional Aurora database (with a writer and a reader) and adding six read replicas.

sql server – Availability group notification when adding a new database

When a new database is added to the SQL Server AG configuration, a notification email like the following is sent to us.

The availability group database "XXXX" is changing roles from "SECONDARY" to "SECONDARY" because the mirroring session or availability group failed over due to role synchronization. This is an informational message only.

What is confusing is the phrase:

"Change roles from" SECONDARY "to" SECONDARY "…"

  1. Can anyone decode the meaning of this?
  2. Is it normal for such a notification to be sent when new databases are added, or am I doing something wrong, or is there some other problem?

–In 'thoughts'…

database: inserting data concurrently when the new row must be calculated from the last row inserted

I am working on a system that awards users points based on the amount they spend on a transaction.

Each time the customer makes a transaction, a message is sent to SQS and triggers a Lambda (.NET Core, EF Core, PostgreSQL) that checks whether the customer already has rows in the points table. If so, it calculates the new points based on the last points received and inserts another row, because we need to keep the points history. However, when the user performs several transactions simultaneously, the points are not calculated correctly.

Points table

User ID | Points before | Points after | Points received | Transaction ID

1       | 0             | 10           | 10              | 1   -> first transaction
1       | 10            | 20           | 10              | 2   -> second transaction
1       | 20            | 30           | 10              | 3
1       | 20            | 30           | 10              | 4

Transactions 3 and 4 occurred simultaneously.

I tried several isolation levels, such as read committed and serializable, but it does not work.

Can anyone help me solve this problem?
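
For what it's worth, a minimal sketch of one common fix (table and column names are illustrative, not taken from the real system): serialize the point calculation per user with a transaction-scoped advisory lock, so the read of the latest row and the insert of the new row happen for one transaction at a time per user:

-- Everything runs inside one transaction per incoming SQS message.
BEGIN;

-- Serialize all point calculations for this user (key = user id).
SELECT pg_advisory_xact_lock(1);

-- Read the latest points row for the user; a concurrent transaction for the
-- same user waits on the advisory lock above, so it will see this insert.
SELECT points_after
FROM   points
WHERE  user_id = 1
ORDER  BY transaction_id DESC
LIMIT  1;

-- Insert the new history row calculated from the value just read.
INSERT INTO points (user_id, points_before, points_after, points_received, transaction_id)
VALUES (1, 20, 30, 10, 3);

COMMIT;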