sql server: appropriate partitioning for a large, insert-heavy application log table

I have a log table that receives records from several applications, several inserts per second, on the order of ~1M rows per day. The table has a dozen or so columns (mostly nvarchar), e.g. severity, application name, and message, plus a clustered ID and some computed columns. One "real" column holds JSON data, and a couple of the computed columns use the JSON functions to extract pieces of it. Another computed column is a "prefix" of the timestamp column (YYYYMMDDHH); an index on a datetimeoffset(7) seemed too granular to me. All of these are persisted. The only indexes are on the ID, the timestamp prefix, and the application name, which I expect to be used in almost every query (3 separate indexes).
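
To make this concrete, the table looks roughly like the sketch below. Every name and type here is a placeholder for illustration (the real table has more columns), and the JSON path in JSON_VALUE is likewise made up.

    CREATE TABLE dbo.AppLog
    (
        Id         bigint IDENTITY NOT NULL,
        LoggedAt   datetimeoffset(7) NOT NULL,
        AppName    nvarchar(128) NOT NULL,
        Severity   nvarchar(16) NOT NULL,
        Message    nvarchar(max) NULL,
        JsonData   nvarchar(max) NULL,
        -- persisted computed columns: pieces extracted from the JSON payload ...
        JsonUser   AS JSON_VALUE(JsonData, '$.user') PERSISTED,
        -- ... and a YYYYMMDDHH prefix of the timestamp
        DatePrefix AS (DATEPART(year,  LoggedAt) * 1000000
                     + DATEPART(month, LoggedAt) * 10000
                     + DATEPART(day,   LoggedAt) * 100
                     + DATEPART(hour,  LoggedAt)) PERSISTED,
        CONSTRAINT PK_AppLog PRIMARY KEY CLUSTERED (Id)
    );

    CREATE INDEX IX_AppLog_DatePrefix ON dbo.AppLog (DatePrefix);
    CREATE INDEX IX_AppLog_AppName    ON dbo.AppLog (AppName);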

Querying was terribly slow until I made the computed columns persisted (no surprise there), which seemed to help, but it is still very slow even when the indexes are used. Scaling things up has not helped much. It is also sporadic (sometimes fast, usually slow), which made me think it was due to heavy, bursty insert load. I spent a long time (days) trying to figure out what exactly is slow, but I really could not pin anything down. Now I am looking into whether partitioning would help. It would certainly help with aging out / deleting old data.

My initial idea was to spread things out so queries could hit multiple drives in parallel and reduce hot spots, but from what I have read, that is the opposite of what I should do (group related data logically to reduce the number of partitions touched). I expect every query to use the timestamp column or the date prefix (derived from the timestamp) to limit its results, and that is also what would drive purging old records, so partitioning on that seems to make sense. Other good candidates seem to be the application name and possibly the severity, which would group together the things that tend to be queried together.

Beyond making it easier to delete old records, would partitioning help improve query times in this scenario, and if so, does my reasoning about the choice of partitioning column make sense? I have read both that it can and that it cannot, so I am not sure whether to go down this route …
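
For reference, what I have in mind looks roughly like the following, continuing the hypothetical schema sketched above (the function/scheme names and boundary values are placeholders, with one partition per month on the persisted YYYYMMDDHH prefix):

    CREATE PARTITION FUNCTION pf_AppLogDatePrefix (int)
        AS RANGE RIGHT FOR VALUES (2019010100, 2019020100, 2019030100);

    CREATE PARTITION SCHEME ps_AppLogDatePrefix
        AS PARTITION pf_AppLogDatePrefix ALL TO ([PRIMARY]);

    -- the clustered index would have to include DatePrefix and be rebuilt on
    -- the partition scheme, e.g.:
    -- CREATE UNIQUE CLUSTERED INDEX PK_AppLog
    --     ON dbo.AppLog (DatePrefix, Id) ON ps_AppLogDatePrefix (DatePrefix);

    -- aging out old data then becomes a metadata-only operation, e.g.:
    -- TRUNCATE TABLE dbo.AppLog WITH (PARTITIONS (1));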

np complete – Sudoku puzzles in O(log n) time, although inefficient

Such an algorithm can solve Sudoku, but it will be very slow in practice.

By definition, Sudoku is played on a 9×9 grid, so there is only a fixed (finite) number of possible puzzles. Your algorithm hard-codes them all. Therefore, your algorithm takes constant time, i.e., $O(1)$ time.

What this tells you is that asymptotic analysis is not a useful tool for analyzing the running time of a Sudoku solver, since asymptotic running time is about how the running time grows as the size of the input grows. With Sudoku you cannot grow the size of the input; the input size is fixed. Therefore, it does not even make sense to talk about the asymptotic running time of solving Sudoku puzzles.

Some people analyze a generalization of Sudoku where, instead of a 9×9 grid, we have a $k^2 \times k^2$ grid, where $k$ can be any integer. Then one can consider asymptotic running time. However, your algorithm cannot be applied to this generalization of Sudoku, since there are infinitely many puzzles in the generalization, so you cannot hard-code them all into the code of your algorithm; every algorithm must have finite length. Therefore, in that context, while asymptotic analysis does apply, your algorithmic approach no longer works.

np complete – Will this algorithm solve all Sudoku puzzles in O(log n) time?

Suppose I have a hypothetical fixed list of all possible Sudoku puzzles. Then I write pseudocode to demonstrate the algorithm.

  • The algorithm performs a search that compares the indices of the elements
    in the puzzle that are not zero.

  • In other words, all of the listed possible Sudoku grids are compared until
    a solution is found whose elements match the puzzle's elements at the
    same indices.

# all possible sudoku grids
tuple1 = [.......]
tuple2 = [.......]

# here is our solver
puzzle = input("enter the puzzle like this: 10000030004000...")

# compare the puzzle against tuple1, tuple2, ...; keep the grid whose
# elements match the puzzle's non-zero elements at the same indices
for solution in (tuple1, tuple2, ...):
    if all(solution[i] == int(d) for i, d in enumerate(puzzle) if d != "0"):
        print(solution)
        break

sierra: need to log to a different file, the unified logger does not seem suitable, ASL is not working

I'm developing an application that needs to do logging, and the unified logger does not seem to fit my needs. Ideally, I would like the logs to go into a specific file readable only by root, because they contain sensitive data. I know the unified logger can mark strings as private unless configured otherwise, but I cannot un-censor logs retroactively, which makes debugging past/recent problems impossible. The only way to debug would be to first make private messages globally visible (i.e., for all applications!) and then catch the problems as they happen. Getting users to do this when they run into problems would be difficult. And if I make all logging public all the time, that has huge security implications. So my requirements are: readable only by root at all times, and readable retroactively.

I have tried using ASL to achieve this, since it fits my needs perfectly, but the rules in my conf file are being ignored. The log messages still show up in the unified logger.

I wrote a simple program to try to achieve my goal:

#include <asl.h>

int main(void)
{
    asl_object_t m = asl_new(ASL_TYPE_MSG);
    asl_set(m, ASL_KEY_SENDER, "hello");
    asl_set(m, ASL_KEY_FACILITY, "hello");
    asl_log(NULL, m, ASL_LEVEL_WARNING, "Hello asl message!");
    asl_free(m);

    return 0;
}

I have tried all kinds of queries/rules/actions in my /etc/asl/hello conf file:

  1. ? [T hello] claim only
     * store_dir /var/log/hello ttl=30

  2. (with and without extern)

     > /var/log/hello.log extern mode=0750
     ? [= Facility hello] claim
     ? [= Facility hello] file /var/log/hello.log fmt=raw rotate=local compress ttl=3

  3. ? [= Facility hello] claim
     ? [= Facility hello] file hello.log fmt=raw rotate=local compress ttl=3

I tell syslogd to reload the settings by running sudo pkill -HUP syslogd after modifying the configuration file.

I have three questions:

  1. Can the unified registrar meet my needs? There is something that I am not
    seeing?
  2. Did I just make a mistake in my ASL code that I could correct?
    is?
  3. The ASL is not only in disuse, it is partially paralyzed, so my
    Are not configs used? This is not possible?

I need to use a system logger of some kind to take advantage of the built-in log rotation. I also need a solution that works on Mojave, High Sierra, and Sierra.

Thank you!

Teamviewer event log – anything suspicious?

Today I was using TeamViewer and a colleague connected to my machine to show me something.
Before we ended the connection, he started transferring files through TeamViewer.

So now I'm suspicious, and I want to check whether he transferred a malicious file to my computer.

Here is the Teamviewer event log since the file transfer began:

https://pastebin.com/dsaHqTZ2

javascript: writing a unit test case to verify that localStorage is empty once you log out of the web application

I am trying to clear my localStorage when I log out of the application. I want to write a unit test case in Jasmine to verify that this happens when the logoff function is executed. I'm writing test cases for the first time, so I'm stuck on the approach.

In my component.ts file I have a logoff function:

logOff() {
    location.href = "/";
    localStorage.clear();
}

spec.ts file

beforeEach(function () {
    var store = {};
    spyOn(localStorage, 'getItem').andCallFake(function (key) {
        return null;
    });
});

I do not know whether this is the right approach to writing the test case for this particular requirement, or which approach is actually valid for this situation.

nt.number theory – Order of magnitude of $\sum \frac{1}{\log^2 p}$, or $\sum \frac{1}{\log^a p}$ for arbitrary $a$

In this MO question, it says that we have

$$ \sum_{p < n} \frac{1}{\log p} = \frac{n}{\log^2 n} + O\left( \frac{n \log\log n}{\log^3 n} \right). $$

where the sum is over all primes $p$ up to some maximum prime $n$. This is derived from the prime number theorem.
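
My rough understanding of where that comes from: almost all primes below $n$ have $\log p$ comparable to $\log n$, so the prime number theorem $\pi(n) \sim n / \log n$ suggests

$$ \sum_{p < n} \frac{1}{\log p} \approx \frac{\pi(n)}{\log n} \approx \frac{n}{\log^2 n}. $$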

My question is, is there a similar result for the order of magnitude of the sum with the logarithm squared?

$$ \sum_{p < n} \frac{1}{\log^2 p} = \;? $$

Or, with the logarithm raised to an arbitrary power, say $a$?

$$ \sum_{p < n} \frac{1}{\log^a p} = \;? $$

I would love an analogous result for either of these.

c# – Using NLog to create a log file: after creating bootstrapper packages and installing on my machine, the log file is not generated

My NLog.config file:

    [the XML contents of NLog.config were stripped from the post and are not shown]
NOTE 1: I have already tried variations of this file, with no results.

NOTE 2: When running the application from Visual Studio, logging works properly.

My configuration is

Build Action: Content

Copy to Output Directory: Copy always

sql server – Why is my log file so massive? It's 22 GB and I'm running log backups

Taking that backup only backs up the log and clears it (marks the space as reusable). The actual size of the log file has to be reduced with a DBCC command if you really need to shrink the log. And depending on how often you back up your log, it is likely to just grow again.

Try running this to see how much space in your log file is actually used:

SELECT 
    [TYPE] = A.TYPE_DESC
    ,[FILE_Name] = A.name
    ,[FILEGROUP_NAME] = fg.name
    ,[File_Location] = A.PHYSICAL_NAME
    ,[FILESIZE_MB] = CONVERT(DECIMAL(10,2), A.SIZE / 128.0)
    ,[USEDSPACE_MB] = CONVERT(DECIMAL(10,2), A.SIZE / 128.0 - ((A.SIZE / 128.0) - CAST(FILEPROPERTY(A.NAME, 'SPACEUSED') AS INT) / 128.0))
    ,[FREESPACE_MB] = CONVERT(DECIMAL(10,2), A.SIZE / 128.0 - CAST(FILEPROPERTY(A.NAME, 'SPACEUSED') AS INT) / 128.0)
    ,[FREESPACE_%] = CONVERT(DECIMAL(10,2), ((A.SIZE / 128.0 - CAST(FILEPROPERTY(A.NAME, 'SPACEUSED') AS INT) / 128.0) / (A.SIZE / 128.0)) * 100)
    ,[AutoGrow] = 'By ' + CASE is_percent_growth
                              WHEN 0 THEN CAST(growth / 128 AS VARCHAR(10)) + ' MB - '
                              WHEN 1 THEN CAST(growth AS VARCHAR(10)) + '% - '
                              ELSE '' END
                        + CASE max_size
                              WHEN 0 THEN 'DISABLED'
                              WHEN -1 THEN 'Unrestricted'
                              ELSE 'Restricted to ' + CAST(max_size / (128 * 1024) AS VARCHAR(10)) + ' GB' END
                        + CASE is_percent_growth
                              WHEN 1 THEN ' [autogrowth by percent, BAD setting!]'
                              ELSE '' END
FROM sys.database_files A
LEFT JOIN sys.filegroups fg ON A.data_space_id = fg.data_space_id
ORDER BY A.TYPE DESC, A.NAME; 

If you do have a lot of free space available, you can run the DBCC SHRINKFILE command to shrink your log file down to whatever size you think it should be.
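
For example (the logical file name and target size below are placeholders; get the real logical name from the query above or from sys.database_files):

    USE YourDatabase;
    -- shrink the log file down to roughly 1 GB (target size is in MB)
    DBCC SHRINKFILE (YourDatabase_log, 1024);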

HOWEVER, whatever activity caused your log file to grow in the first place is likely to continue. From the sound of it, I think you are only taking one log backup per day.

What you should do is take multiple log backups throughout the day, in between your full database backups. I would probably recommend starting with every hour and adjusting from there to see what works best for you. You can keep doing this through maintenance plans if that is comfortable for you. Otherwise, you can use Ola Hallengren's scripts to set up the backup jobs. There are many different options to go with, and for the most part they are all pretty good, as long as you are taking frequent log backups.
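
Each of those scheduled log backups is just a one-liner along these lines (the database name and path are placeholders, and in practice each backup should get a unique file name, which maintenance plans or Ola's scripts handle for you):

    BACKUP LOG YourDatabase
        TO DISK = N'D:\SQLBackups\YourDatabase_log_201901011200.trn'
        WITH COMPRESSION;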