I have a table with more than 50M rows.
I need to fetch 4 columns and display them in order of the 'name_voter' column.
First, I started with a basic query like:
SELECT id, name_voter, home_street_address_1, home_address_city
FROM base_voter
WHERE deleted_at IS NULL
ORDER BY name_voter
OFFSET @pagesize * (@pagenumber - 1) ROWS FETCH NEXT @pagesize ROWS ONLY
OPTION (RECOMPILE)
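For concreteness, here is how the paging arithmetic in that query works out for one page. The parameter values are purely illustrative (they are not from my real workload):

```sql
-- Illustrative values only, assuming @pagesize = 100 and @pagenumber = 3:
DECLARE @pagesize int = 100, @pagenumber int = 3;

-- OFFSET skips the first @pagesize * (@pagenumber - 1) = 200 rows of the
-- sorted result, then FETCH NEXT returns rows 201-300. Note the server
-- still has to locate (and, without a suitable index, sort past) all the
-- skipped rows, which is why later pages get slower.
SELECT id, name_voter, home_street_address_1, home_address_city
FROM base_voter
WHERE deleted_at IS NULL
ORDER BY name_voter
OFFSET @pagesize * (@pagenumber - 1) ROWS FETCH NEXT @pagesize ROWS ONLY
OPTION (RECOMPILE);
```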
As the data grew, this became slow, so I decided to create an index like:
CREATE INDEX IX_base_voter_name_voter_asc_deleted_at
ON base_voter (name_voter ASC, deleted_at)
INCLUDE (home_street_address_1, home_address_city)
WHERE deleted_at IS NULL
This made it a lot faster, but not fast enough, so one of my friends suggested creating an additional table like:
CREATE TABLE base_voter_name_voter (
    sr_no bigint NOT NULL,
    base_voter_id int,
    CONSTRAINT pk_name_voter_sr_no PRIMARY KEY (sr_no)
)
GO
and to execute the following procedure on every insert/delete against the base_voter table:
CREATE PROCEDURE procUpdateSortDetails
AS
BEGIN
    TRUNCATE TABLE base_voter_name_voter

    INSERT INTO base_voter_name_voter (sr_no, base_voter_id)
    SELECT ROW_NUMBER() OVER (ORDER BY name_voter ASC), id
    FROM base_voter WITH (NOLOCK)
    WHERE deleted_at IS NULL
END
GO
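To make "execute on every insert/delete" concrete, one way it could be wired up is with a trigger. This is only a sketch of the maintenance burden being described; the trigger name, and the idea of calling the procedure from a trigger at all, are my assumptions, not part of the suggestion I was given:

```sql
-- Hypothetical trigger: rebuilds the whole ordering table after any change.
-- Rebuilding a row-number table over 50M rows inside a trigger would be
-- very expensive; this only illustrates the per-write cost in question.
CREATE TRIGGER trg_base_voter_resync
ON base_voter
AFTER INSERT, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    EXEC procUpdateSortDetails;
END
GO
```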
After this, the execution time is acceptable, but I am still not comfortable running that procedure on every write.
1) Is there any other way to make this faster without manually updating this additional table on every insert? Something that, like an index, is updated automatically.
2) Shouldn't the index by itself be enough to handle a task like this? Am I using it incorrectly, or is this all an index is capable of?
3) Is there any other way to approach this problem so that the query stays fast even on the last page?
4) If there is another way to represent the data so that this query is not necessary at all, please mention it.
My data table looks something like:
name_voter (asc) | home_street_address_1 | home_address_city
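As DDL, the table described above might look like the following. This is only a sketch; the post does not give column types, so every type here (and the identity primary key) is an assumption:

```sql
-- Hypothetical schema matching the columns described; all types assumed.
CREATE TABLE base_voter (
    id int IDENTITY PRIMARY KEY,
    name_voter nvarchar(200),
    home_street_address_1 nvarchar(200),
    home_address_city nvarchar(100),
    deleted_at datetime NULL  -- soft-delete marker, per the WHERE clauses above
)
GO
```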
Please suggest some way to address this problem.