query performance – SQL Server – Prevent Clustered Index Scan on a specific table

My database contains a specific table that is quite large (250K+ rows, 100+ GB of data space).

A clustered index scan on this table is always a bad idea.

This table has multiple indexes and we never run a query on this table without specifying predicates on indexed columns.

Since we migrated to SQL Server 2016 and activated the new cardinality estimator, we have experienced random query-plan regressions: queries that used to run in a few seconds randomly start to time out after several dozen minutes.

Of course, I can easily fix any single query by adding hints, but my applications contain far too many queries for that to be a practical solution.

Is there a way to globally prevent SQL Server from generating a clustered index scan on this table?
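There is no supported switch that forbids clustered index scans on one table, but since the regressions began with the new cardinality estimator, one option is to revert the estimator database-wide instead of hinting every query. A sketch, assuming SQL Server 2016+ and that the legacy estimates were the good ones:

```sql
-- Revert only the cardinality estimator to the pre-2014 model,
-- while keeping the other compatibility-level 130 features.
-- Applies database-wide, so no per-query hints are needed:
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Per-query alternative, useful for A/B testing a single statement:
-- SELECT ... OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```

This trades the new estimator's improvements elsewhere for stability on this workload, so it is worth verifying the overall effect before leaving it on.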

google search console – No discovered URLs and the index coverage button is disabled

I am using Rank Math as an SEO plugin.
As per their recommendations, as well as Google's, I should only submit the sitemap index XML file.

But Search Console shows 0 discovered URLs, and the index coverage button is disabled.


PS: the sitemap index was added a month ago!

html – Undefined index: module error in php

Recently I made a form and attached modules to it, but soon after I got an error saying:
Notice: Undefined index: module in F:\website\webpage\gate.php on line 68

Here is the line that was mentioned:

switch ($_GET["module"]) {
case "settings":

I am also including the code for the link that I mentioned:

<a class="nav-link" href="gate.php?module=settings"> Settings </a>

Please rectify this error for me, and sorry for any trouble.
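The notice means `$_GET['module']` does not exist when gate.php is opened without the query string. A minimal sketch of the usual guard (the default value `'settings'` is an assumption, pick whatever page should load by default):

```php
<?php
// Hypothetical excerpt of gate.php: read the parameter safely.
// The ?? operator supplies a default when "module" is absent,
// so no "Undefined index" notice is raised.
$module = $_GET['module'] ?? 'settings';

switch ($module) {
    case 'settings':
        // ... show the settings page ...
        break;
    default:
        // ... handle an unknown module ...
        break;
}
```

Note that array access uses square brackets (`$_GET['module']`), not parentheses.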

sql server – How to control Segmentation min/max data_id on a non-clustered ColumnStore index

Given a simple row-based table without a PK but with a row-based clustered index like so:

create clustered index CX_PropertyValue on dbo.EntityValue (PropertyId, Value);

Then I wish to add a column store index that is segmented in the same order as the clustered index above:

create nonclustered columnstore index CS_IX_EntityValue on dbo.PropertyValue (
    PropertyId, Value
) with (drop_existing = on, maxdop = 1); -- maxdop = 1 to preserve the order given by the clustered index

The maxdop hint to preserve order came from here.

Then the following query was used to report the min/max data_id for the PropertyId column, and the full range was reported on each of the 7 segments:

-- (Warning: This query is not joined quite right in that it may report the wrong table column 
-- name but the min/max data itself makes it obvious which column is PropertyId for this case).
select top 20000000000
       s.Name as SchemaName, 
       t.Name as TableName,
       c.name as ColumnName,
       c.column_id as ColumnId,
       cs.segment_id as SegmentId,
       cs.min_data_id as MinValue,
       cs.max_data_id as MaxValue
  from sys.schemas s
  join sys.tables t
    on t.schema_id = s.schema_id
  join sys.columns c
    on c.object_id = t.object_id
  join sys.partitions as p  
    on p.object_id = t.object_id
  join sys.column_store_segments cs
    on cs.hobt_id = p.hobt_id
   and cs.column_id = c.column_id
 order by s.Name, t.Name, c.Name, cs.Segment_Id

I tried making the clustered index unique, which slightly affected the reported ranges, but they still were not monotonically increasing.

Any ideas?

Here is a link that accomplished the segmentation in this manner, but I don't see what I'm doing differently.

MariaDB reset table index value?

I was wondering what to do when, for example, a table is approaching its maximum auto-increment value. I was using this query to determine the auto-increment usage per table:

    SELECT t.TABLE_NAME,
           c.COLUMN_TYPE,
           c.MAX_VALUE,
           t.AUTO_INCREMENT,
           IF(c.MAX_VALUE > 0, ROUND(100 * t.AUTO_INCREMENT / c.MAX_VALUE, 2), -1) AS `Usage (%)`
      FROM information_schema.TABLES t
      JOIN (
            SELECT TABLE_SCHEMA,
                   TABLE_NAME,
                   COLUMN_TYPE,
                   CASE
                     WHEN COLUMN_TYPE LIKE 'tinyint(%)' THEN 127
                     WHEN COLUMN_TYPE LIKE 'tinyint(%) unsigned' THEN 255
                     WHEN COLUMN_TYPE LIKE 'smallint(%)' THEN 32767
                     WHEN COLUMN_TYPE LIKE 'smallint(%) unsigned' THEN 65535
                     WHEN COLUMN_TYPE LIKE 'mediumint(%)' THEN 8388607
                     WHEN COLUMN_TYPE LIKE 'mediumint(%) unsigned' THEN 16777215
                     WHEN COLUMN_TYPE LIKE 'int(%)' THEN 2147483647
                     WHEN COLUMN_TYPE LIKE 'int(%) unsigned' THEN 4294967295
                     WHEN COLUMN_TYPE LIKE 'bigint(%)' THEN 9223372036854775807
                     WHEN COLUMN_TYPE LIKE 'bigint(%) unsigned' THEN 0
                     ELSE 0
                   END AS MAX_VALUE
              FROM information_schema.COLUMNS
             WHERE EXTRA LIKE '%auto_increment%'
           ) c
        ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
       AND c.TABLE_NAME = t.TABLE_NAME
     WHERE c.TABLE_SCHEMA = 'Database_Name'
     ORDER BY `Usage (%)` DESC;

Which would return something like this:

| TABLE_NAME             | COLUMN_TYPE | MAX_VALUE  | AUTO_INCREMENT | Usage (%) |
|------------------------|-------------|------------|----------------|-----------|
| app_crontasks          | int(11)     | 2147483647 |        1536304 |      0.07 |
| app_alerts             | int(11)     | 2147483647 |              1 |      0.00 |
| app_apiclients         | int(11)     | 2147483647 |              2 |      0.00 |
| app_replicates         | int(11)     | 2147483647 |              1 |      0.00 |
| ...                    | ...         | ...        | ...            | ...       |

If one fills up to, say, 75%, would we then need to clean up the database? How can I do that safely? Would it affect the foreign keys assigned to it?
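For context, resetting the counter only helps if old rows can actually be deleted; the usual safe remedy is widening the column before it fills up. A sketch, assuming the auto-increment column is named `id` (adjust to the real schema) and that a maintenance window is acceptable, since the ALTER rebuilds the table:

```sql
-- Widen a nearly-full signed INT auto_increment column to BIGINT UNSIGNED.
-- Columns in other tables that store this id (foreign keys) must be widened
-- to the same type first, or the FK definitions will block the change.
ALTER TABLE app_crontasks
    MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;
```

Deleting rows does not reset the counter by itself; `ALTER TABLE ... AUTO_INCREMENT = n` can lower it, but only safely to a value above the largest remaining id.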

locking – SQL Server: When is a real shared (S, not IS) lock acquired on a page of a clustered index?

All the explanations I find seem to indicate that, without special hints – which is the case in our software -, shared locks are only acquired for keys, with IS locks at page and object level (and, yes, an S lock on the database).

Lock escalation of S row (key) locks goes directly to the table (object) level, so no S page locks can result from escalation, if I'm correct here.

And foreign key constraint checking also gets (transaction-wide) S locks on keys, if I understand it correctly (see my other question for this).

However, we see lots of S (not IS) page locks during a simple ETL process (doing simple UPDATEs/INSERTs and some DELETEs via a linked server) – where could they come from?
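One way to pin down where the page-level S locks come from is to snapshot them while the ETL runs and tie them back to sessions; a diagnostic sketch using the standard lock DMV:

```sql
-- Snapshot of page-level S locks and the sessions holding or waiting on them:
SELECT l.request_session_id,
       l.resource_type,
       l.resource_description,   -- file:page of the locked page
       l.request_mode,
       l.request_status
  FROM sys.dm_tran_locks AS l
 WHERE l.resource_type = 'PAGE'
   AND l.request_mode  = 'S';
```

Joining the session ids back to `sys.dm_exec_requests` then shows which statements are taking them.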

Harald M.

Should I include my markdown directly in my search index or should I convert it into plain text first?

I have a question concerning my website’s search index. I write my posts in Markdown and generate my website statically with Hugo. My doubt is whether I should include my post content in my search index as plain text or as raw Markdown.
I’d personally prefer plain text, since it’s much easier for readers, but I was given this counter-example, which has left me doubtful.

Here’s an example of what I mean by plain and direct markdown:
This is plain text. vs `This is inserted directly`.

I’m asking this because I wanted to go with the web’s conventions rather than my own preferences.
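For what it's worth, converting Markdown to plain text before indexing does not require a full parser. A rough sketch of stripping the most common inline syntax (the function name and regexes are illustrative, not from any particular library; a real pipeline would more likely render the Markdown and strip the HTML):

```python
import re

def markdown_to_plain(text: str) -> str:
    # Rough, illustrative stripper -- not a full Markdown parser.
    text = re.sub(r"`([^`]*)`", r"\1", text)               # inline code spans
    text = re.sub(r"\*\*([^*]+)\*\*", r"\1", text)         # bold
    text = re.sub(r"\*([^*]+)\*", r"\1", text)             # emphasis
    text = re.sub(r"\[([^\]]+)\]\([^)]*\)", r"\1", text)   # links -> link text
    return text

print(markdown_to_plain("This is `inserted directly`."))  # -> This is inserted directly.
```

Indexing the stripped text keeps backticks and asterisks out of search results, which matches the "plain text" preference above.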

postgresql – Pre-caching an index on a large table in PostgreSQL

I have a table with about 10 million rows, with a primary key and an index defined on it:

    create table test.test_table (
        date_info date not null,
        string_data varchar(64) not null,
        data bigint,
        primary key (date_info, string_data)
    );

    create index test_table_idx
        on test.test_table (string_data);

I have a query that makes use of test_table_idx:

select distinct date_info from test.test_table where string_data = 'some_val';

The issue is that the first run can take up to 20 seconds, while any subsequent run takes under 2 seconds.

Is there a way to load the entire index into memory up front, rather than having the DB read it on first access?
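PostgreSQL ships a contrib extension for exactly this: pg_prewarm can pull a relation or index into shared_buffers on demand. A sketch, assuming the contrib package is installed on the server:

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Load the whole index into shared_buffers; returns the number of blocks read.
SELECT pg_prewarm('test.test_table_idx');

-- The primary key's underlying index can be prewarmed the same way by name,
-- e.g. SELECT pg_prewarm('test.test_table_pkey');  -- default PK index name, verify in \d
```

Running this after a restart (or via the pg_prewarm autoprewarm background worker) avoids the cold-cache 20-second first run.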

linux – Methods for tracking processing time for long running ADD INDEX call in MySQL

I’ve set off index creation on a very large table in MySQL, and while I expected it to take a long time, I’m 5 days in and wondering whether there’s any way to debug potential issues or whether I should simply let it run. I don’t have a precise row count, but to estimate, it’s in the hundreds of billions of rows and the table is ~400 GB on disk. Neither memory nor CPU usage appears to be overly taxed (memory at ~8 GB out of 16 GB total).

The call I made from within MySQL is as follows:

alter table prices add index (dataDate, ticker, expDate, type), add index (symbol), algorithm=inplace, lock=none;

Running show processlist from a different MySQL session shows the call with State ‘altering table’, so the call doesn’t appear blocked. Is there anything else I can check to gauge progress?

For reference, I’m working with MySQL 8 on Ubuntu 18.04.
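MySQL 8 can report progress for online ALTER TABLE through the Performance Schema, provided the stage instruments are enabled. A diagnostic sketch (note: instruments enabled mid-statement may not capture an ALTER that is already running):

```sql
-- Enable the InnoDB alter-table stage instruments (if not already on):
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME LIKE 'stage/innodb/alter table%';

UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME LIKE 'events_stages_%';

-- Then watch estimated vs. completed work units for the running statement:
SELECT stmt.SQL_TEXT,
       stage.EVENT_NAME,
       stage.WORK_COMPLETED,
       stage.WORK_ESTIMATED
  FROM performance_schema.events_stages_current AS stage
  JOIN performance_schema.events_statements_current AS stmt
    ON stage.NESTING_EVENT_ID = stmt.EVENT_ID;
```

WORK_COMPLETED relative to WORK_ESTIMATED gives a rough percentage; the estimate is revised as the ALTER proceeds.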