## algorithms – How to group thousands of data points for each user

I have thousands of data points associated with users. So a single user can have 2000-10000 data points. These data points are identified by contiguous numbers (e.g. all numbers from 0 to 2000). Each data point can have 3 states: true, false, unknown.

What could I do to group these data points so that they don’t take too much space?

My initial plan is to basically cluster them. Something like:
unknown: (1-23, 25, 27)
true: (24, 26)
false: (28-30)

Where m-n represents the numbers from m to n.
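The clustering idea above amounts to a run-length pass over the contiguously numbered points. A minimal sketch; the function name and the state labels are hypothetical, not from the question:

```python
def encode_ranges(states):
    """Group contiguously numbered points (0..n-1) into per-state runs.

    A run of length 1 is stored as a single index, longer runs as
    (start, end) pairs, mirroring the "m-n" notation above.
    """
    groups = {}
    start = 0
    for i in range(1, len(states) + 1):
        # Close the current run when the state changes or the input ends
        if i == len(states) or states[i] != states[start]:
            run = start if i - 1 == start else (start, i - 1)
            groups.setdefault(states[start], []).append(run)
            start = i
    return groups

# e.g. points 0-1 unknown, 2 true, 3-4 false
print(encode_ranges(["unknown", "unknown", "true", "false", "false"]))
# → {'unknown': [(0, 1)], 'true': [2], 'false': [(3, 4)]}
```

Since each point has only three states, a flat 2-bits-per-point bitmap is an alternative worth comparing: 10,000 points then take 2,500 bytes regardless of how the states are distributed, which can beat range lists when states alternate often.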

## is there a fast way to find whether an uploaded image is similar to an image in a specific folder that contains thousands of images?

import os

import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
app.config["DEBUG"] = True
path = 'images'

@app.route('/check', methods=['POST'])
def api_id():
    # Decode the uploaded image from the request body
    f = request.files['image'].read()
    npimg = np.frombuffer(f, np.uint8)
    image_to_compare = cv2.imdecode(npimg, cv2.IMREAD_GRAYSCALE)

    # Initiate BRISK descriptor; compute it once for the uploaded image
    brisk = cv2.BRISK_create()
    keypoints1, descriptors1 = brisk.detectAndCompute(image_to_compare, None)

    # create BFMatcher object (Hamming distance suits BRISK's binary descriptors)
    bf_matcher = cv2.BFMatcher(normType=cv2.NORM_HAMMING,
                               crossCheck=True)

    my_list = os.listdir(path)
    print('Total images to check:', len(my_list))
    for cl in my_list:
        captured = cv2.imread(os.path.join(path, cl), cv2.IMREAD_GRAYSCALE)

        # Find the keypoints and compute the descriptors for the stored image
        keypoints2, descriptors2 = brisk.detectAndCompute(captured, None)
        if descriptors2 is None:
            continue

        # Matching descriptor vectors using Brute Force Matcher
        matches = bf_matcher.match(queryDescriptors=descriptors1,
                                   trainDescriptors=descriptors2)

        # Sort them in the order of their distance
        matches = sorted(matches, key=lambda x: x.distance)

        # Score: matched keypoints as a percentage of the smaller keypoint
        # set; treat 50% or more as "similar"
        number_keypoints = min(len(keypoints1), len(keypoints2))
        if number_keypoints and len(matches) / number_keypoints * 100 >= 50:
            return jsonify({"ID": os.path.splitext(cl)[0], "SIMILAR": "true"})

    return jsonify({"SIMILAR": "false"})

app.run()

## sql server – Monitoring a query which executes thousands of runs per minute and is generally fast

I am looking for some advice here on one of our SQL Server databases with the behavior below.

Queries running against this database are generally considered good, with an average run time of 20 ms.

Suddenly, on some odd days, it will go from 20 ms to 80 ms, and it is very hard for our monitoring process to capture exactly when the run time shifted and why.

Our current monitoring includes these two methods:

1. DMVs for cached top SQL queries, which don't help much because the metrics are cumulative, so it's hard to find the point in time when the issue happened and whether a plan or something else changed.

2. Extended Events for rpc_completed, sp_statement_completed, sql_batch_completed, and sql_statement_completed, but only for queries running over 5 seconds.
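For point 1, the usual workaround for cumulative DMV counters is to snapshot them on a schedule (e.g. every minute into a history table) and diff consecutive snapshots, which turns lifetime totals into per-interval averages and makes the moment of a 20 ms → 80 ms shift visible. A minimal sketch of the diffing step only; the dict shapes are hypothetical stand-ins for rows sampled from sys.dm_exec_query_stats:

```python
def snapshot_delta(prev, curr):
    """Diff two snapshots of cumulative per-query stats.

    Each snapshot maps a query id to (execution_count, total_elapsed_ms),
    both cumulative since plan compilation.  Returns, per query, the
    executions and average elapsed ms within the sampling interval.
    """
    delta = {}
    for query, (execs, elapsed_ms) in curr.items():
        prev_execs, prev_elapsed = prev.get(query, (0, 0))
        d_execs = execs - prev_execs
        if d_execs > 0:
            delta[query] = (d_execs, (elapsed_ms - prev_elapsed) / d_execs)
    return delta

# The interval average jumps to 80 ms even though the cumulative
# lifetime average has barely moved (240,000 / 10,500 ≈ 22.9 ms)
prev = {"q1": (10_000, 200_000)}   # lifetime avg so far: 20 ms
curr = {"q1": (10_500, 240_000)}   # 500 new execs, 40,000 new ms
print(snapshot_delta(prev, curr))  # → {'q1': (500, 80.0)}
```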

We have not looked into Query Store yet due to some issues I am reading about online, because this DB is on SQL Server 2017 with an AG setup in a 2-DC configuration. Though there are trace flags for some of those issues, our engineering team is still hesitant to use Query Store on this server, which is a high-OLTP system averaging 30-40K batch requests/sec.

Please suggest or advise if I am missing anything, short of a third-party monitoring tool.

## Bitcoin Stealer – Now Make Thousand Of Money Easily {HOT} {TESTED} | Proxies123.com

Hello Mates,

I found this method by contacting this guy on this forum, I thought I
might share this. What it does is basically you register one account and
use the described technique for good income & free money in bitcoins,
no user interaction needed.

Checkout here :
http://www.bitshacking.com/forum/black-hat-money-making/46697-hot-bitcoin-stealer-now-make-thousand-money-easily.html

Thanks and just let this bot be one of your side bots to Make much US

## Am receiving more than a thousand mails in a single day from ‘sample@email.tst’ continuously

I am using the ‘WP Offload SES Lite’ plugin to collect questions & answers through forms. But yesterday I received thousands of mails in a single day, continuously. I think someone tried to hack the site. Can you please tell me how to protect against these kinds of attacks?

## sql server – Reorganizing/rebuilding indexes on 460 THOUSAND tables?

I’ve been tasked with maintaining an old (SQL 2005; no, I can’t upgrade it!) server which has 461,000 tables and 1.15M indexes. How do I go about maintaining those indexes?

My first thought is to create a list (stored in a table) of indices, having these attributes:

• schema name
• table name
• index name
• page count (null at first)
• frag pct (null at first)
• date rebuilt/reorganized (null at first)

From there, each night I would query sys.dm_db_index_physical_stats for as many tables as I can analyze and update the table. Eventually I'd have a list of all tables and their fragmentation statistics. After that, I can defragment (reorganize or rebuild as necessary) all the indexes which I deem require it.
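The reorganize-vs-rebuild decision at the end of that loop can be a small pure function. A sketch; the 1,000-page floor and the 5%/30% fragmentation thresholds below are the commonly cited Microsoft rules of thumb, not values from the question:

```python
def maintenance_action(frag_pct, page_count,
                       min_pages=1000, reorg_at=5.0, rebuild_at=30.0):
    """Decide what to do with one index, using common rules of thumb:
    skip tiny indexes (defrag gains are negligible), REORGANIZE moderate
    fragmentation, REBUILD heavy fragmentation.
    Returns None when the index is not worth touching."""
    if page_count < min_pages or frag_pct < reorg_at:
        return None
    if frag_pct < rebuild_at:
        return "REORGANIZE"
    return "REBUILD"

print(maintenance_action(12.0, 5000))  # → REORGANIZE
print(maintenance_action(45.0, 5000))  # → REBUILD
print(maintenance_action(45.0, 200))   # → None (too small to matter)
```

With 1.15M indexes, persisting the "date rebuilt/reorganized" column as planned and ordering each night's batch by the oldest date keeps the nightly window bounded while guaranteeing every index is eventually revisited.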

Are there any better ways to do this?

## [ Politics ] Open Question : Why does the Morbidly Obese Clown continue to tweet? Does he not realize that NOBODY respects him! 100 thousand dead & 40 mill unemployed?

Morbidly Obese Clown FAILED! He has NO testing! NO ventilators. No supplies for hospitals! NO plan! And all he can do is try and entice violence! Morbidly Obese Clown needs to sit his fat A*S*S down on his unicycle and fart his way into hell where he belongs with Jeffrey Epstein and Bill Barr!

## gmail – How to delete 100 thousand emails all at once?

I have about 100 thousand emails in my Gmail; the good thing is they are all under the “All Mail” label.

How can I delete all my emails at once?

The Gmail interface can only display 100 emails per page, so what's the workaround to delete all emails at once?

## [ Politics ] Open Question : The world has 7 thousand million people & the USA has just 330 million. Yet the USA has 1/3 of the total coronavirus deaths in the world?

Why has Donald Trump FAILED so MISERABLY to protect the lives of Americans?
Trump prefers to go off golfing. What a complete moron he is. An utter FAILURE.