Implementing Unique Index by ignoring similar rows in PostgreSQL

I am currently working on creating an append-only PostgreSQL table where I want to insert a row every time a business object is updated or deleted.

One issue I have is replicating the guarantees provided by a unique index. Here is an example for a standard table:

create table users (
  id uuid DEFAULT uuid_generate_v4 (),
  email text not null unique
);

In my project I’d like to instead create a user_versions table like this:

create table user_versions (
  id uuid DEFAULT uuid_generate_v4 (),
  original_id uuid not null,
  email text not null -- I cannot make it unique here because I want to save multiple versions
);

The unique index generated in the users case provides exactly the guarantee I need, with good performance.

For the user_versions table, I am looking for a solution to ensure that:

  • We can insert multiple rows with the same (original_id, email)
  • We can’t insert another row if the email already exists in a row with a different original_id

In other words, here is what I’d like to reproduce:

insert into users (email) values ('joe@dev.null'); -- works
insert into users (email) values ('joe@dev.null'); -- does not work

insert into user_versions (email, original_id) values ('joe@dev.null', 'xxxx-xxxx-xxxx-1'); -- works
insert into user_versions (email, original_id) values ('joe@dev.null', 'xxxx-xxxx-xxxx-1'); -- works
insert into user_versions (email, original_id) values ('joe@dev.null', 'xxxx-xxxx-xxxx-2'); -- does not work

Does anyone know how to implement this use case? I’d like to avoid using transactions for that.
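One idea I’m considering (an untested sketch; it assumes the btree_gist extension is available, and the constraint name is my own) is an exclusion constraint that rejects any two rows sharing an email but differing in original_id:

create extension if not exists btree_gist;

alter table user_versions
  add constraint user_versions_email_owner -- name is mine, untested
  exclude using gist (email with =, original_id with <>);

If that works, it should allow any number of rows with the same (original_id, email) pair while rejecting the third insert above, though I haven’t compared its performance against a plain unique index.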

typescript – aws-cdk ignoring tsconfig paths and typeroots

I have a project making use of AWS-CDK all written in TypeScript.

I have a local folder called typings which contains my “global” types – there are no imports/exports there.

I am able to use these types anywhere in my code except in my cdk app definitions. Meaning I can make a cdk stack that contains a lambda using these non-imported types, but I cannot use the types directly in my cdk app build files. TS just says it cannot find them.

I had a similar issue with jest (ts-jest), where it could not resolve the paths. That was fixed with a helper module, import 'tsconfig-paths/register', and I suspect CDK has the same underlying issue – it disregards my tsconfig. I am using import 'tsconfig-paths/register' here as well to deal with the paths, but I have no idea how to fix it not finding my types. While I haven’t tried it yet, I suspect jest will have the same typing issue. It seems related to running CLI scripts.

tsconfig.json

{
  "extends": "@tsconfig/node12/tsconfig.json",
  "compilerOptions": {
    "target": "ES2018",
    "module": "commonjs",
    "lib": ("es2018"),
    "declaration": true,
    "strict": true,
    "strictNullChecks": true,
    "alwaysStrict": true,
    "noImplicitAny": false,
    "noImplicitReturns": false,
    "noImplicitThis": false,
    "noUnusedLocals": false,
    "noUnusedParameters": false,
    "noFallthroughCasesInSwitch": false,
    "inlineSourceMap": true,
    "inlineSources": true,
    "experimentalDecorators": true,
    "strictPropertyInitialization": false,
    "resolveJsonModule": true,
    "outDir": "dist",
    "baseUrl": ".",
    "paths": {
      "lib/*": ("src/lib/*"),
      "handlers/*": ("src/handlers/*"),
      "config/*": ("src/config/*")
    },
    "typeRoots": (
      "node_modules/@types",
      "typings"
    ),
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  }
}

Sample cdk app:

import 'tsconfig-paths/register'
import { App } from '@aws-cdk/core' // assuming CDK v1 package layout
import { Rule } from '@aws-cdk/aws-events'

const app = new App()

// later, in some stack
addEventRule(events: EventType) { // EventType is a custom declaration
  
  new Rule(this, 'ModuleLambdaBuilderRule', {
    ...
  }).addTarget(...)
}

My custom type in typings/common.ts

type EventType = 'foo' | 'bar'

In the above – I cannot use EventType anywhere in the cdk files. But I can use it fine in any of my lambda code for example.

How can I resolve this? I admit I am a bit lost in the current way of doing things – evidently typeRoots is a patch I am not really supposed to use. I tried adding all my files using the include and files tsconfig options, but it made no difference.
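One thing I’m starting to suspect (a guess, not verified): the CDK CLI runs the app through whatever command cdk.json specifies, and if that command is ts-node, then by default ts-node only loads the entry file and whatever it imports – never the ambient files picked up via include/typeRoots. If that’s the cause, passing ts-node’s --files flag in cdk.json might force it to load them (the bin/app.ts path below is a placeholder for my actual entry point):

{
  "app": "npx ts-node --files bin/app.ts"
}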

microsoft excel – I would like to find all unique distinct values in a row of cells, ignoring blanks

I would like to combine all unique distinct values from a row of cells into one cell.

Example: https://i.stack.imgur.com/gFxz8.png

I have tried the code below, but it adds an additional comma for blanks. Is there a way to fix this code to account for blanks?

Function CombineUnique(xRg As Range, xChar As String) As String
    Dim xCell As Range
    Dim xDic As Object
    ' A Scripting.Dictionary stores each distinct value once, as a key
    Set xDic = CreateObject("Scripting.Dictionary")
    For Each xCell In xRg
        xDic(xCell.Value) = Empty
    Next
    ' Join the unique keys with the chosen delimiter
    CombineUnique = Join$(xDic.Keys, xChar)
    Set xDic = Nothing
End Function
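
For what it’s worth, the tweak I’m about to try (an untested sketch; the function name is just my own, to tell the two versions apart) is to skip empty cells before they are added as dictionary keys, so a blank never becomes a key and never produces the extra comma:

Function CombineUniqueSkipBlanks(xRg As Range, xChar As String) As String
    Dim xCell As Range
    Dim xDic As Object
    Set xDic = CreateObject("Scripting.Dictionary")
    For Each xCell In xRg
        ' Skip blank cells so they never become dictionary keys
        If Len(Trim$(CStr(xCell.Value))) > 0 Then
            xDic(xCell.Value) = Empty
        End If
    Next
    CombineUniqueSkipBlanks = Join$(xDic.Keys, xChar)
    Set xDic = Nothing
End Function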

Thank you

unity – Unity2019 – PrintDocument ignoring ‘DefaultPageSettings.Landscape = true’

So I’m trying to print from Unity in landscape; I can get it to print, just not in landscape.

So I create a PrinterSettings object in void Start and set the printer and tray from dropdowns… that part appears to work, but the Landscape property is completely ignored.

public void ShowDialog(string filePath, bool isLandscape)
{
    PdfiumViewer.PdfDocument pdfiumDoc = PdfiumViewer.PdfDocument.Load(filePath);
    _printDocument = pdfiumDoc.CreatePrintDocument();
    _printDocument.DocumentName = "Print Document";

    _isLandscape = isLandscape;

    _printPanel.SetActive(true);
}

public void Print()
{
    int index = _paperSourcesStrings.IndexOf(SelectedPaperSource);

    _printDocument.PrinterSettings = _printerSettings;
    _printDocument.DefaultPageSettings.Landscape = _isLandscape;

    if (index != -1)
    {
        _printDocument.DefaultPageSettings.PaperSource = _paperSources[index];
    }

    _printDocument.Print();

    _printPanel.SetActive(false);
}
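
One workaround I’m considering (untested; it relies on the fact that System.Drawing.Printing raises QueryPageSettings before each page, and those per-page settings can override DefaultPageSettings) is to force the orientation on every page:

// Untested sketch: force landscape per page, in case the PdfiumViewer
// print document overrides DefaultPageSettings page by page.
_printDocument.QueryPageSettings += (sender, e) =>
{
    e.PageSettings.Landscape = _isLandscape;
};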

MySQL takes huge memory by ignoring buffer pool size

I am facing a very strange problem: on startup, MySQL looks like it takes more than 4x the buffer pool size in memory.

I am using an Ubuntu VM (4 cores, 8 GB) with MySQL 5.6.33. /etc/mysql/my.cnf is as below:

[client]
port        = 3306
socket      = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice        = 0

[mysqld]
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /datafiles/mysql
tmpdir      = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address        = 0.0.0.0
key_buffer      = 128M
max_allowed_packet  = 128M
thread_stack        = 192K
thread_cache_size       = 8
wait_timeout        = 300
interactive_timeout = 600
max_connect_errors = 1000000
open-files-limit = 1024000
transaction-isolation   = READ-COMMITTED
myisam-recover-options  = BACKUP
max_connections        = 25000
query_cache_limit   = 1M
query_cache_size        = 4M
general_log_file        = /dblogs/audit/general_log.log
general_log             = OFF
log_error = /dblogs/error.log
slow_query_log = ON
slow_query_log_file = /dblogs/SLOW.log
long_query_time = 2
min_examined_row_limit = 5000
server-id       = 1
log_bin         = /dblogs/binarylogs/mysql-bin.log
expire_logs_days    = 10
max_binlog_size   = 10M
binlog_format       = MIXED
innodb_strict_mode = OFF
sql_mode = NO_ENGINE_SUBSTITUTION
innodb_file_format = barracuda
innodb_file_format_max = barracuda
innodb_file_per_table = 1
innodb_data_home_dir        = /datafiles/mysql
innodb_buffer_pool_size = 300M
innodb_buffer_pool_instances = 1
innodb_open_files = 6000
innodb_log_file_size = 512M
innodb_log_buffer_size = 64M
innodb_lock_wait_timeout    = 600
innodb_io_capacity  = 400
innodb_flush_method = O_DSYNC
innodb_flush_log_at_trx_commit = 2
innodb_write_io_threads = 2
innodb_read_io_threads = 2
innodb_log_files_in_group = 2
innodb_monitor_enable = all
join_buffer_size=256K
sort_buffer_size=256K

[mysqldump]
quick
quote-names
max_allowed_packet  = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completion

[isamchk]
key_buffer      = 16M

!includedir /etc/mysql/conf.d/

As per the config file, innodb_buffer_pool_size is set to 300M and innodb_buffer_pool_instances is set to 1 (no need for more, as the buffer pool is < 1G). The error log also confirms that the buffer pool is set to 300.0M.

ERROR.log

2020-09-24 01:50:06 4886 [Note] Plugin 'FEDERATED' is disabled.
2020-09-24 01:50:06 4886 [Note] InnoDB: Using atomics to ref count buffer pool pages
2020-09-24 01:50:06 4886 [Note] InnoDB: The InnoDB memory heap is disabled
2020-09-24 01:50:06 4886 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-09-24 01:50:06 4886 [Note] InnoDB: Memory barrier is not used
2020-09-24 01:50:06 4886 [Note] InnoDB: Compressed tables use zlib 1.2.8
2020-09-24 01:50:06 4886 [Note] InnoDB: Using Linux native AIO
2020-09-24 01:50:06 4886 [Note] InnoDB: Using CPU crc32 instructions
2020-09-24 01:50:06 4886 [Note] InnoDB: Initializing buffer pool, size = 300.0M
2020-09-24 01:50:06 4886 [Note] InnoDB: Completed initialization of buffer pool
2020-09-24 01:50:06 4886 [Note] InnoDB: Highest supported file format is Barracuda.
2020-09-24 01:50:06 4886 [Note] InnoDB: 128 rollback segment(s) are active.
2020-09-24 01:50:06 4886 [Note] InnoDB: Waiting for purge to start
2020-09-24 01:50:06 4886 [Note] InnoDB: 5.6.33 started; log sequence number 4529735
2020-09-24 01:50:06 4886 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
2020-09-24 01:50:06 4886 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
2020-09-24 01:50:06 4886 [Note] Server socket created on IP: '0.0.0.0'.
2020-09-24 01:50:06 4886 [Note] Event Scheduler: Loaded 0 events
2020-09-24 01:50:06 4886 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.6.33-0ubuntu0.14.04.1-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)

What’s more, show global variables like '%pool%'; also shows that the buffer pool is set properly:

+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| innodb_additional_mem_pool_size     | 8388608        |
| innodb_buffer_pool_dump_at_shutdown | OFF            |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_instances        | 1              |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | OFF            |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 314572800      |
+-------------------------------------+----------------+

But after the service starts, if I check using free -h or top, MySQL is taking more than 75% of RAM:

             total       used       free     shared    buffers     cached
Mem:          7.5G       5.3G       2.1G        28K       2.8M        77M
-/+ buffers/cache:       5.3G       2.2G
Swap:           9G       1.2G       8.8G

top command output:

PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 8071 mysql     20   0 6982340 5.741g   9260 S   0.3 77.0   0:02.19 mysqld

I tried changing innodb_buffer_pool_size to 1G, 2G, and 3G, and innodb_buffer_pool_instances to 1, 8, and 10, but irrespective of all these, when I start the MySQL service 5-6 GB of the total RAM (7 GB) is taken by MySQL, and as soon as I stop the service, RAM usage drops to 224M.

What is this behaviour?
Did I misconfigure MySQL?
Why is the MySQL service taking more than 4x (or 8x) the buffer pool size?
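
One thing I still need to rule out, in case it’s relevant: I’ve read that in 5.6 performance_schema sizes several of its internal buffers from max_connections at startup, and mine is set to 25000, so I plan to check how much it allocated, roughly like this (untested diagnostic; the last line of the output should report performance_schema’s total memory):

-- Untested diagnostic: the final row of the output reports
-- performance_schema's total memory allocation.
SHOW ENGINE PERFORMANCE_SCHEMA STATUS;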

Errant line spacing issue with new Docs, ignoring ‘Add space after paragraph’ on page break

New documents have suddenly started exhibiting strange behaviour, and I cannot for the life of me figure it out. The main symptom is that at a page break there is no gap between paragraphs. This is very annoying because, at a glance, it makes the text look like part of the previous paragraph, particularly if the text happens to end near the bottom of the page. On closer testing and inspection, there is also a very slightly different line spacing between paragraphs, as a page now fits 28 lines instead of 27.

Screenshots: Line Space Present, Line Space Missing, Comparison.

Comparison document (turn off Print Mode once you get into the document and scroll down to the page break).

I have been through every setting I can find. It’s not custom spacing (those are identical); it’s not the header or footer; it’s not the page size, the margins, or the style; it’s not the text itself (copied and pasted from one doc to the other, and I even copied out and inspected the HTML output). There seems to be no reason for it. And yet the desired spacing persists if I make a copy of an existing document – which I can do, it’s just annoying – which suggests it is some property stored in the document somewhere. It’s the same whether I create the doc on desktop or through the app, and I’ve also tried setting the default styles by updating the Normal style and then setting it as the default. Completely clearing all styles still leaves a difference between the documents.

Comparing the two (otherwise identical) docs shows a change reading “Format page: background colour, bottom margin, left margin, right margin, size, top margin”, although all of those settings are identical.

Also reported here.

python – Detecting junctions from rough line drawings by ignoring the rough lines

My goal is to detect junctions in rough line drawings while ignoring the rough lines themselves. I have tried the code below, but I haven’t been able to get a good result yet. Could anyone help me, please?
# Part 1 – Building the CNN
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense, Dropout
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
import cv2
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

def myfun(image):
    # Binarize the input drawing before feeding it to the network
    img = np.array(image)
    grayscal = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    ret, bin_image = cv2.threshold(grayscal, 127, 255, cv2.THRESH_BINARY)
    return Image.fromarray(bin_image)

# Initialising the CNN
classifier = Sequential()

# Step 1 – Convolution layer
classifier.add(Convolution2D(32, (3, 3), input_shape=(28, 28, 1), activation='relu'))

# Step 2 – Pooling
classifier.add(MaxPooling2D(pool_size=(2, 2)))

# Adding a second convolution layer
classifier.add(Convolution2D(32, (3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))

# Adding a third convolution layer
classifier.add(Convolution2D(64, (3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))

# Step 3 – Flattening
classifier.add(Flatten())

# Step 4 – Full connection
classifier.add(Dense(256, activation='relu'))
classifier.add(Dropout(0.5))
classifier.add(Dense(6, activation='softmax'))

# Compiling the CNN
classifier.compile(
    optimizer=optimizers.SGD(lr=0.01),
    loss='categorical_crossentropy',
    metrics=['accuracy'])

# Part 2 – Fitting the CNN to the images
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory(
    'traininglast2',
    target_size=(28, 28),
    batch_size=32,
    color_mode='grayscale',
    class_mode='categorical')
print(training_set)

test_set = test_datagen.flow_from_directory(
    'testinglast',
    target_size=(28, 28),
    batch_size=32,
    color_mode='grayscale',
    class_mode='categorical')

model = classifier.fit_generator(
    training_set,
    steps_per_epoch=10000,
    epochs=50,
    validation_data=test_set,
    validation_steps=6799)  # number of testing images / batch size

# Saving the model
classifier.save('model5.h5')
print(model.history.keys())

# Summarize history for accuracy
plt.plot(model.history['accuracy'])
plt.plot(model.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()

# Summarize history for loss
plt.plot(model.history['loss'])
plt.plot(model.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
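
As an aside, the step counts above look suspect to me for my dataset sizes. If I understand the Keras generators correctly (a guess on my part: len() of a DirectoryIterator is ceil(num_samples / batch_size)), something like this would cover each set exactly once per epoch:

# Cover each dataset exactly once per epoch instead of hard-coded counts
steps_per_epoch = len(training_set)
validation_steps = len(test_set)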
This is one of the line drawings I have produced so far.

Does using inequality (<>) or NOT in where clause lead to SQL ignoring the index?

I found this article, which talks about avoiding the <> operator in the WHERE clause because the optimizer ignores the index:

https://www.mssqltips.com/sqlservertutorial/3203/avoid-using-not-equal-in-where-clause/

Is this absolutely true?

What is the best way to handle this? How can I avoid using <>?
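
For example, the rewrite I’ve seen suggested (the orders table and status column here are made up for illustration, and I haven’t benchmarked it) splits the inequality into two ranges so each side can seek the index:

-- Hypothetical table: orders, with an index on status.
-- Original form, which may scan despite the index:
SELECT * FROM orders WHERE status <> 'shipped';

-- Range rewrite; each predicate is sargable on its own:
SELECT * FROM orders WHERE status < 'shipped' OR status > 'shipped';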

seo – Google ignoring LocalBusiness sub-categories?

I’m working for a review website, where we (of course) want a reference to our reviews to show up in the Local Business listing in the Google search results. The local businesses are various medical clinics.

When testing with the Rich Results Test tool in Search Console, I see there is no local business element found/listed if I use either the MedicalOrganization or MedicalBusiness schema, but if I use either LocalBusiness or Hospital there is.

I expect that for MedicalOrganization it is because it’s not a subtype of LocalBusiness, but I really don’t understand why MedicalBusiness doesn’t work, as it is a subtype and is more precise than just using LocalBusiness.

Can anyone help me understand the cause of this behaviour?


In case it’s relevant, here are a couple of examples:

  1. Clinic using the Hospital schema. Shows the “Local Business” section in the testing tool by default, but not when switching to MedicalOrganization or MedicalBusiness.
  2. Clinic using MedicalOrganization. Does not show the “Local Business” section in the testing tool by default or when switching to MedicalBusiness, but does show it when set to LocalBusiness or Hospital.
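
For concreteness, the markup being tested looks roughly like this (all names and values below are placeholders, not our real data); swapping only the @type between Hospital, LocalBusiness, MedicalBusiness, and MedicalOrganization is what produces the differing results described above:

{
  "@context": "https://schema.org",
  "@type": "MedicalBusiness",
  "name": "Example Clinic",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Street",
    "addressLocality": "Example City"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "31"
  }
}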

php fpm – Nginx is ignoring root directive in config file

I’m moving a site from one server to another as I’ve done a dozen times, but this time I’ve done something wrong.

Nginx is processing this request but ignoring the root directive. Instead of serving the content located at /var/www/html/example.com/index.php it’s serving the content /var/www/html/notexample.com/index.php.

I think the document_root in fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; isn’t getting updated, but I don’t understand why I haven’t run into this problem before.

server {
        listen 8080 http2;
        listen [::]:8080 http2;
        server_name  example.com www.example.com;

        location / {
                try_files $uri $uri/ /index.php?$args;
        }

        root    /var/www/html/example.com/; # This gets completely ignored and nginx serves the root of a different server block

        index   index.php index.html;

        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/run/php-fpm/www.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }

        ssl on;
        ssl_certificate /path/to/ssl/example.com.fullchain.crt;
        ssl_certificate_key /path/to/ssl/example.com.key;
}

What do I need to check to get nginx to serve the directory located at /var/www/html/example.com/?

The use of port 8080 is deliberate while I get things configured and working. Presently port 443 is successfully proxying traffic from the original server.
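
In case it’s useful, this is how I’ve been checking which server block actually answers (hostnames and paths match the sanitized config above):

# Send a request with the expected Host header straight to port 8080
# and see which document root's content comes back:
curl -v -H "Host: example.com" http://127.0.0.1:8080/index.php

# Dump the full effective configuration and look for another server
# block on 8080 that might be the default for this listen socket:
nginx -T | grep -n -E "listen|server_name|root"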