Log all user activity in SQL Server as history to trace users’ queries

I need to log the attributes below to a new table whenever any user or application connects to and queries anything on the SQL Server instance.

  1. Remote machine name.
  2. Remote machine IP.
  3. Server name or IP connected to.
  4. Action date.
  5. Type of access.
  6. The username used for login.
  7. Database name connected to.
  8. Queries executed.
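Most of the attributes above can be captured with an Extended Events session; the sketch below is one minimal way to do it (the session and file names are placeholders). The action date comes from the event's own timestamp, and the client IP is not exposed as an XE action, so it would have to be joined in from `sys.dm_exec_connections.client_net_address` (or captured in a logon trigger) instead:

```sql
CREATE EVENT SESSION user_activity_audit ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (
        sqlserver.client_hostname,       -- 1. remote machine name
        sqlserver.client_app_name,       -- 5. type of access (which application)
        sqlserver.server_principal_name, -- 6. login used
        sqlserver.database_name,         -- 7. database connected to
        sqlserver.sql_text               -- 8. query executed
    )
)
ADD TARGET package0.event_file (SET filename = N'user_activity_audit.xel');
GO
ALTER EVENT SESSION user_activity_audit ON SERVER STATE = START;
```

The captured events can then be read with `sys.fn_xe_file_target_read_file` and inserted into the history table on a schedule.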

performance – What can cause higher CPU time and duration for a given set of queries in traces run on two separate environments?

I’m troubleshooting a performance issue in a SQL Server DR environment for a customer. They are running queries that consistently take longer in their environment than in our QA environment. We analyzed traces captured in both environments with the same parameters/filters, the same version of SQL Server (2016 SP2), and the exact same database. Both environments were picking the same execution plan(s) for the queries in question, and the number of reads/writes was close in both environments; however, the total duration of the process in question and the CPU time logged in the trace were significantly higher in the customer environment. Total duration in our QA environment was around 18 seconds versus over 80 seconds in the customer's, and our CPU time was close to 10 seconds while theirs was also over 80 seconds. Also worth mentioning: both environments are currently configured with MAXDOP 1.

The customer has less memory (~100 GB vs 120 GB) and slower disks (10k HDD vs SSD) than our QA environment, but more CPUs. Both environments are dedicated to this activity and should have little to no external load that wouldn’t match. I don’t have all the details on the CPU architecture they are using; I’m waiting for some of that information now. The customer has confirmed they have excluded SQL Server and the data/log files from their virus scanning. Obviously there could be a ton of issues in the hardware configuration.

I’m currently waiting to see a recent snapshot of their wait stats and system DMVs; the data we originally received didn’t appear to show any major CPU, memory, or disk latency pressure. I recently asked them to check whether the Windows power setting is in high-performance or balanced mode, though I’m not certain whether throttled CPUs would have the impact we’re seeing.

My question is: what factors can affect CPU time and, ultimately, total duration? Is CPU time, as shown in a SQL trace, based primarily on the speed of the processors, or are there other factors I should be taking into consideration? The fact that both environments generate the same query plans, with all other things as close to equal as possible, makes me think it’s related to the hardware SQL Server is installed on.
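One low-cost check while waiting on the customer's snapshot is the signal-wait share in `sys.dm_os_wait_stats` (a sketch; the benign-wait filter list below is abbreviated and should be extended for real use). Time spent runnable but waiting for a scheduler shows up as signal wait, which is exactly what throttled or slower cores would inflate:

```sql
-- Signal waits = time a task was runnable but waiting for CPU.
-- A high signal/total ratio suggests CPU, not I/O, is the constraint.
SELECT TOP 10
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count,
       CAST(100.0 * signal_wait_time_ms / NULLIF(wait_time_ms, 0)
            AS decimal(5, 1)) AS signal_pct
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH', N'XE_TIMER_EVENT',
                        N'WAITFOR', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

On the OS side, `powercfg /getactivescheme` run on the customer's server would confirm the power-plan question directly.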

microservices – Where to place an in-memory cache to handle repetitive bursts of database queries from several downstream sources, all within a span of a few milliseconds

I’m working on a Java service that runs on Google Cloud Platform and uses a MySQL database via Cloud SQL. The database stores simple relationships between users, the accounts they belong to, and groupings of accounts. Being an “accounts” service, it naturally has many downstreams. Downstream service A may, for example, hit several other upstream services B, C, and D, which in turn might call other services E and F; but because so much is tied to accounts (checking permissions, getting user preferences, sending emails), every service from A to F ends up hitting my service with identical, repetitive calls. In other words, a single call to some endpoint might result in 10 queries to get a user’s accounts, even though that information obviously doesn’t change over a few milliseconds.

So where is it appropriate to place a cache?

  1. Should downstream service owners be responsible for implementing a cache? I don’t think so; why should they have to know about my service’s data, such as what can be cached and for how long?

  2. Should I put an in-memory cache in my service, like Google’s Common CacheLoader, in front of my DAO? But, does this really provide anything over MySQL’s caching? (Admittedly I don’t know anything about how databases cache, but I’m sure that they do.)

  3. Should I put an in-memory cache in the Java client? We use gRPC so we have generated clients that all those services A, B, C, D, E, F use already. Putting a cache in the client means they can skip making outgoing calls but only if the service has made this call before and the data can have a long-enough TTL to be useful, e.g. an account’s group is permanent. So, yea, that’s not helping at all with the “bursts,” not to mention the caches living in different zone instances. (I haven’t customized a generated gRPC client yet, but I assume there’s a way.)

I’m leaning toward #2 but my understanding of databases is weak, and I don’t know how to collect the data I need to justify the effort. I feel like what I need to know is: How often do “bursts” of identical queries occur, how are these bursts processed by MySQL (esp. given caching), and what’s the bottom-line effect on downstream performance as a result, if any at all?
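For a sense of what option #2 could look like: below is a minimal hand-rolled sketch of a short-TTL read-through cache in front of the DAO. In practice you would more likely reach for Guava's `LoadingCache` or Caffeine; every name here (`BurstCache`, the loader function) is illustrative, not an existing API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal read-through cache with a per-entry TTL. A TTL of a few hundred
// milliseconds is enough to absorb a "burst" of identical queries without
// serving meaningfully stale data.
class BurstCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;  // e.g. dao::getAccountsForUser
    private final long ttlMillis;

    BurstCache(Function<K, V> loader, long ttlMillis) {
        this.loader = loader;
        this.ttlMillis = ttlMillis;
    }

    V get(K key) {
        long now = System.currentTimeMillis();
        // compute() keeps concurrent callers from loading the same key twice;
        // note it briefly blocks other writers to that key while loading.
        Entry<V> e = map.compute(key, (k, old) ->
            (old != null && old.expiresAt > now)
                ? old
                : new Entry<>(loader.apply(k), now + ttlMillis));
        return e.value;
    }
}
```

This sits purely inside the service, so it needs no changes in the generated gRPC clients and every downstream benefits from it immediately.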

I feel experience may answer this question better than finding those metrics myself.

Asking myself, “Why do I want to do this, given no evidence of any bottleneck?” Well, (1) it just seems wrong that there are so many duplicate queries, (2) it adds a lot of noise to our logs, and (3) I don’t want to wait until we scale to find out that it’s a deep issue.

database – Extracting user field values from dynamic SQL queries


I have successfully written a fairly long dynamic SQL query; however, I am struggling with a seemingly simple part at the end.

Although I am able to extract mail and name from the users table successfully, when I try to extract field_first_name it returns the error below.

The users table has a column with the machine name: field_first_name


    $database = \Drupal::service('database');

    $select = $database->select('flagging', 'f');
    $select->fields('f', array('uid', 'entity_id'));
    $select->leftJoin('node__field_start_datetime', 'nfds', 'nfds.entity_id = f.entity_id');
    $select->fields('nfds', array('field_start_datetime_value'));
    $select->leftJoin('node_field_data', 'nfd', 'nfd.nid = f.entity_id');
    $select->fields('nfd', array('title'));
    $select->leftJoin('users_field_data', 'ufd', 'ufd.uid = f.uid');
    // TODO extract first name
    $select->fields('ufd', ['mail', 'name', 'field_first_name']);

    $executed = $select->execute();
    $results = $executed->fetchAll(\PDO::FETCH_ASSOC);

    foreach ($results as $result) {
        $username = $result['name'];
        $email = $result['mail'];
        $first_name = $result['field_first_name'];
    }


Drupal\Core\Database\DatabaseExceptionWrapper: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'ufd.field_first_name' in 'field list': SELECT f.uid AS uid, f.entity_id AS entity_id, nfds.field_start_datetime_value AS field_start_datetime_value, nfd.title AS title, ufd.mail AS mail, ufd.name AS name, ufd.field_first_name AS field_first_name FROM {flagging} f LEFT OUTER JOIN {node__field_start_datetime} nfds ON nfds.entity_id = f.entity_id LEFT OUTER JOIN {node_field_data} nfd ON nfd.nid = f.entity_id LEFT OUTER JOIN {users_field_data} ufd ON ufd.uid = f.uid; Array ( ) in event_notification_cron() (line 63 of /app/modules/custom/event_notification/event_notification.module).

postgresql – How can I find the most resource-intensive queries that have ever run in my database and use a specific table?

In my database I want to check whether my idea for an improvement to a table schema works. Therefore I want to look at the heaviest SELECT queries ever run against our database that use the specific table I want to change.

My idea is to check the speed and database load of our heaviest queries, then make the schema changes (and possibly refactor the queries as well) and rerun them, so I can benchmark my idea with real data.

That way I can have proof that my changes actually improve search speed.

So how can I find out which queries have run against a PostgreSQL RDS instance, and how heavy they are?
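The usual tool for this is the `pg_stat_statements` extension, which RDS supports once it is added to `shared_preload_libraries` via the instance's parameter group. It keeps cumulative statistics per normalized statement, so the heaviest queries touching a given table can be ranked like this (a sketch; `my_table` is a placeholder name):

```sql
-- Heaviest statements mentioning my_table, by total execution time.
-- On PostgreSQL 13+ the columns are total_exec_time / mean_exec_time;
-- on older versions they are named total_time / mean_time.
SELECT query, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements
WHERE query ILIKE '%my_table%'
ORDER BY total_exec_time DESC
LIMIT 20;
```

Note the statistics are cumulative since the last reset (`pg_stat_statements_reset()`), not a log of every query ever run, so resetting before a representative workload window gives the cleanest before/after comparison.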

sql server – How should I structure my SQL database to make search queries fast

  • My computer has 64 cores
  • Microsoft SQL Server Data Tools 16.0.62007.23150 installed

One initial question: which SQL Server version would be best for 64 cores?

I am new to SQL databases and have understood that how you structure the database is important, so that searching and extracting the needed information (queries) goes faster later.

I believe I have also understood that using data types that take up less memory is also good for speed later on, like using a smallint instead of an int when a smallint will do.

I’d like to ask whether the structure I am thinking of is well designed for extracting information later, or whether I should do this a bit differently. The database will hold stock symbol data, and as I’ll show, it will be extremely big, which is the point of this question.

This is the whole structure that I have in mind (Example comes after explanation):

  1. I will use 4 columns: (DateTime | Symbol | FeatureNr | Value)
  2. DateTime has a format down to the minute: 201012051545
  3. Symbol and FeatureNr are smallint. For example: MSFT = 1, IBM = 2, AAPL = 3. So, as you see, instead of storing strings in the columns I store smallint values that represent those symbols/feature numbers, so that search queries go faster later.
  4. The database will, for example, have 50 symbols, where each symbol has 5000 features.
  5. The database will have 15 years of data.

Now I have a few big questions here:
If we fill this database with data for just 1 symbol, it will have this many rows:

1440 minutes (1 day) * 365 days * 15 years * 5000 features = 39,420,000,000

Question 1:
39,420,000,000 rows in a database seems like a lot, or is this no problem?

Question 2:
The above was just for 1 symbol. Now, I have 50 symbols, which would mean:
39,420,000,000 * 50 = 1,971,000,000,000 rows. I don’t know what to say about this. Is this too many rows, or is it okay? Should I have 1 database per symbol, for example, and not all 50 symbols in one database?

Question 3:
Setting aside how many rows are in the database: do you think the database is well structured for fast search queries? What I will ALWAYS search for, every time, is shown below (it will return 5000 lines (features)). Notice that I search for ONLY one symbol and one specific datetime.
I will always do this exact search, and never any other type of search, in case you have any idea how I should best structure the database with those 50 stock symbols.

As in Question 2: should I have one database per symbol? Would this result in faster searches, for example?
(symbol = 2, smalldatetime = 1546), where I want to return the featureNr and value, which would be the lines below: (I will ALWAYS ONLY do this exact search)

201012051546 | 2 | 1 | 76.123456789
201012051546 | 2 | 2 | 76.123456789
201012051546 | 2 | 3 | 76.123456789

Question 4:
Wouldn’t it be most optimal to have 1 table for each symbol and datetime?
In other words: 1 table for symbol = 2 and smalldatetime = 1546, which holds 5000 rows of features, and then do this for each symbol and datetime?
This will result in very many tables. Or is this not good in some other way? Notice that I will later need to retrieve, in a loop, all features (5000) from all those tables (7,884,000 tables), and it is very important that this goes as fast as possible. I know it might be difficult to say, but approximately how long could a process like this take with my structure on a 64-core computer?

1440 minutes (1 day) * 365 days * 15 years = 7,884,000 tables

My idea for the database/table structure:
smalldatetime | symbol (smallint) | featureNr (smallint) | value (float(53))

201012051545 | 1 | 1 | 65.123456789
201012051546 | 1 | 1 | 66.123456789
201012051547 | 1 | 1 | 67.123456789
201012051545 | 1 | 2 | 65.123456789
201012051546 | 1 | 2 | 66.123456789
201012051547 | 1 | 2 | 67.123456789
201012051545 | 1 | 3 | 65.123456789
201012051546 | 1 | 3 | 66.123456789
201012051547 | 1 | 3 | 67.123456789

201012051545 | 2 | 1 | 75.123456789
201012051546 | 2 | 1 | 76.123456789
201012051547 | 2 | 1 | 77.123456789
201012051545 | 2 | 2 | 75.123456789
201012051546 | 2 | 2 | 76.123456789
201012051547 | 2 | 2 | 77.123456789
201012051545 | 2 | 3 | 75.123456789
201012051546 | 2 | 3 | 76.123456789
201012051547 | 2 | 3 | 77.123456789

201012051545 | 3 | 1 | 85.123456789
201012051546 | 3 | 1 | 86.123456789
201012051547 | 3 | 1 | 87.123456789
201012051545 | 3 | 2 | 85.123456789
201012051546 | 3 | 2 | 86.123456789
201012051547 | 3 | 2 | 87.123456789
201012051545 | 3 | 3 | 85.123456789
201012051546 | 3 | 3 | 86.123456789
201012051547 | 3 | 3 | 87.123456789
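For reference, the single-table alternative to many databases/tables would key the clustered index to exactly this one access pattern; a sketch (table and constraint names are illustrative):

```sql
CREATE TABLE dbo.SymbolFeature (
    ObservedAt smalldatetime NOT NULL,  -- minute precision matches the data
    Symbol     smallint      NOT NULL,
    FeatureNr  smallint      NOT NULL,
    Value      float(53)     NOT NULL,
    CONSTRAINT PK_SymbolFeature
        PRIMARY KEY CLUSTERED (Symbol, ObservedAt, FeatureNr)
);

-- The one search ever run: one symbol, one minute -> 5000 feature rows.
-- With the clustered key above this is a single index seek plus a short
-- range scan, regardless of how many other symbols share the table.
SELECT FeatureNr, Value
FROM dbo.SymbolFeature
WHERE Symbol = 2
  AND ObservedAt = '2010-12-05 15:46';
```

Because the clustered index leads with (Symbol, ObservedAt), the 5000 rows for one symbol and minute are physically adjacent, which is the property the per-symbol or per-minute table ideas are trying to achieve by other means.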

php – Attaching category queries to search field

I’m trying to filter the loop based on some category buttons, so that when someone clicks a button the search results are filtered. However, I can’t seem to pass the category to the search results.

Here is the query function in functions.php:

function poet_category_button_query( $category ) {
    global $category_query;
    return $category_query = get_posts( array( 'category_name' => $category ) );
}

Here is the PHP for the search & category buttons:

<form role='search' method='get' class='search-form' action='<?php echo home_url( '/' ); ?>'>
    <div class='search-wrapper'>
        <span class='screen-reader-text'><?php echo _x( 'Search for:', 'label' ) ?></span>
        <input type='search' class='search-field'
            placeholder='<?php echo esc_attr_x( 'Search...', 'placeholder' ) ?>'
            value='<?php echo get_search_query() ?>' name='s'
            title='<?php echo esc_attr_x( 'Search for:', 'label' ) ?>' />
        <input type='submit' class='search-submit'
            value='<?php echo esc_attr_x( 'Search', 'submit button' ) ?>' />
    </div>
    <h2 class='categories-header'>Categories</h2>
    <div class='categories-wrapper'>
        <?php
        $categories = get_categories( array(
            'orderby'    => 'name',
            'order'      => 'ASC',
            'hide_empty' => false,
        ) );
        foreach ( $categories as $category ) {
            // Render a button linking to the category archive, then run the query.
            // (The printf wrapper is a guess; my original snippet omitted it.)
            printf(
                '<a class="category-button" href="%s">%s</a>',
                esc_url( get_category_link( $category->term_id ) ),
                esc_html( $category->name )
            );
            poet_category_button_query( $category->name );
        }
        ?>
    </div>
</form>

collision detection – Optimizing a quadtree for circles and circular queries

I can’t recommend a structure that actually implements what you’re asking for, but it definitely won’t work like a quad tree. It may not be a tree at all and it might not even exist…

A quad tree has a tree structure because each node represents the node above it divided into four quadrants. At any level these four quadrants cover all of the space covered by their parent. Every single point in the 2D plane can be inserted at exactly one leaf node somewhere in the tree.

If you try to divide space into circles, you’re not going to be able to find a set of circles that cover their parent circle completely and evenly. This geometric problem rules out the possibility that you can store these circles in a tree that will be useful to traverse for collision detection. There will always be points that are either not covered by a circle or covered by more than one circle. (That’s bad because it means a point in 2D space either has nowhere to go in the tree or it has more than one place to go!)

Testing whether two circles intersect is simple, but I am not aware of anything simpler than just checking if the distance between them is less than the sum of their radii.
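That test can even skip the square root by comparing squared distances; a minimal sketch:

```java
public final class Circles {
    // True when the circles overlap or touch: distance(centers) <= r1 + r2.
    // Comparing squared values avoids the sqrt call entirely.
    public static boolean intersect(double x1, double y1, double r1,
                                    double x2, double y2, double r2) {
        double dx = x2 - x1, dy = y2 - y1;
        double rs = r1 + r2;
        return dx * dx + dy * dy <= rs * rs;
    }
}
```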

java – Queries regarding taking inputs in competitive programming

I am a beginner in competitive programming, and I sometimes have trouble reading the input for a problem. For example, in one question the first line consists of two space-separated integers N and K, where N denotes the number of camps and K is a threshold value; the next N lines each consist of two space-separated integers vi and ei; and then for each camp i the next ei lines consist of three space-separated integers.
Where can I learn to read input in this format? Sometimes I can’t solve a problem simply because I can’t read the input appropriately, since most websites only make you implement a function.
Any suggestions on how I can get better? (PS: I prefer Java.)
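The nested-counts format described above can be read with plain `java.util.Scanner`; a sketch (the sum it returns is just a placeholder aggregate so the parse can be checked, and `CampInput` is an invented name):

```java
import java.util.Scanner;

public class CampInput {
    // Reads: first line "N K"; then for each of N camps a line "vi ei",
    // followed by ei lines of three integers each.
    static long parse(Scanner sc) {
        int n = sc.nextInt();
        long k = sc.nextLong();      // threshold; not used while parsing
        long sum = 0;
        for (int i = 0; i < n; i++) {
            int v = sc.nextInt();    // vi
            int e = sc.nextInt();    // ei: how many detail lines follow
            for (int j = 0; j < e; j++) {
                int a = sc.nextInt();
                int b = sc.nextInt();
                int c = sc.nextInt();
                sum += c;            // process (a, b, c) for camp i here
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        // Scanner is the simplest starting point; for large inputs the usual
        // competitive-programming pattern is BufferedReader + StringTokenizer,
        // which is much faster.
        System.out.println(parse(new Scanner(System.in)));
    }
}
```

The key idea is that the counts read earlier (N, then each ei) drive the loop bounds for the lines that follow, so there is no need to know the total line count up front.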

magento2 – Magento 2.3.4 – Server temp folder gets full because of SQL queries

I am using Magento 2.3.4, and the server temp folder gets full because of SQL queries. I am looking for someone who can optimize the database server and the queries coming from Magento 2. The error below appears when the site crashes.


SQLSTATE[HY000]: General error: 1021 Disk full (/dev/shm/#sql_2e1b28_7.MAI); waiting for someone to free some space… (errno: 28 "No space left on device"), query was: SELECT main_table.* FROM eav_attribute AS main_table
INNER JOIN eav_entity_type AS entity_type ON main_table.entity_type_id = entity_type.entity_type_id
LEFT JOIN eav_entity_attribute ON main_table.attribute_id = eav_entity_attribute.attribute_id
INNER JOIN catalog_eav_attribute AS additional_table ON main_table.attribute_id = additional_table.attribute_id WHERE (entity_type_code = 'catalog_product') AND ((additional_table.is_used_in_grid = 1)) GROUP BY main_table.attribute_id


Can someone help me?
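As a first diagnostic (a sketch; the right values depend on your server's RAM), it's worth checking how often MySQL/MariaDB spills internal temporary tables to disk, since the error shows those on-disk temp tables landing in /dev/shm:

```sql
-- In-memory temp table limits; result sets larger than these spill to disk.
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';

-- How often temp tables have spilled to disk since server startup:
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
```

A steadily climbing `Created_tmp_disk_tables` alongside queries like the GROUP BY above suggests either raising those limits, growing the /dev/shm mount, or pointing `tmpdir` at a larger filesystem.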