clustered index – Cluster is causing my query to run SLOWER than before

So I have been stuck on this for 6 hours now and I have no clue what to do. I am doing university homework that requires us to create an unoptimized SQL query (it does not have to make sense), then apply indexes and see whether they make it faster (they did for me, from 0.70 elapsed time to 0.66), and then apply clusters.

I applied clusters, and the query now takes almost double the time to finish: from 0.70 to 1.15. Below is how I specified my cluster:

CREATE CLUSTER customer2_custid25 (custid NUMBER(8))
SIZE 270
TABLESPACE student_ts;

I also repeated my earlier attempts with INITIAL and NEXT storage parameters, but that seemed to make no difference. Below are the tables:

CREATE TABLE CUSTOMER18 (
    CustID         NUMBER(8) NOT NULL,
    FIRST_NAME     VARCHAR2(15),
    SURNAME        VARCHAR2(15),
    ADDRESS        VARCHAR2(20),
    PHONE_NUMBER   NUMBER(12))
CLUSTER customer2_custid25(CustID);

CREATE TABLE product18 (
    ProdID   NUMBER(10) NOT NULL,
    PName    VARCHAR2(6),
    PDesc    VARCHAR2(15),
    Price    NUMBER(8),
    QOH      NUMBER(5));

CREATE TABLE sales18 (
    SaleID     NUMBER(10) NOT NULL,
    SaleDate   DATE,
    Qty        NUMBER(5),
    SellPrice  NUMBER(10),
    CustID     NUMBER(8),
    ProdID     NUMBER(10))
CLUSTER customer2_custid25(CustID);

CREATE INDEX customer2_custid_clusterindxqg ON CLUSTER customer2_custid25 TABLESPACE student_ts;

I also tried removing the TABLESPACE clause from the cluster index.

I followed this formula to help calculate cluster sizes:

“Size of a cluster is the size of a parent row + (size of child row * average number of children).”

This brought me to a size of 270. However, after testing sizes from 250 to 350 in steps of 20, I found 320 to be the fastest, still at 1.15.
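As a sanity check on the formula's inputs, the measured average row sizes can be read from the data dictionary. This is an illustrative aside, assuming optimizer statistics have been gathered on both tables (e.g. via DBMS_STATS.GATHER_TABLE_STATS):

-- Illustrative sanity check: AVG_ROW_LEN feeds the formula
-- SIZE = parent row size + (child row size * avg children per parent).
SELECT table_name, avg_row_len
FROM   user_tables
WHERE  table_name IN ('CUSTOMER18', 'SALES18');

-- Average number of sales rows per customer:
SELECT AVG(cnt)
FROM   (SELECT COUNT(*) AS cnt FROM sales18 GROUP BY custid);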


No matter what I try, I cannot for the life of me get it below my base query times.

Other students have done the same and halved their query time.

All help is really appreciated.

SQL Server Cluster – READABLE SECONDARY

On our SQL Server cluster, I get messages when the backup runs on the secondary node:

BACKUP failed to complete the command BACKUP LOG activity. Check the backup application log for detailed messages.

and

DESCRIPTION: The target database, 'engageone', is participating in an availability group and is currently not accessible for queries. Either data movement is suspended or the availability replica is not enabled for read access. To allow read-only access to this and other databases in the availability group, enable read access to one or more secondary availability replicas in the group. For more information, see the ALTER AVAILABILITY GROUP statement in SQL Server Books Online.

I understand that this is because the secondary DB is read-only.

I would like to solve this problem.

I think that if I set the secondary DB to Readable Secondary = Yes or Read-intent only, the problem will be solved.

But I have a question:

If I set the "Readable Secondary" parameter to Yes or Read-intent only, and the secondary DB then becomes disconnected while someone sends a query to it, what will happen?

Will the secondary DB respond or not?

Of course, all our applications send their queries to the cluster name.
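For reference, the ALTER AVAILABILITY GROUP statement that the error message points to looks roughly like this; [MyAG] and 'SQLNODE2' are hypothetical placeholders for the availability group and replica names:

-- Hypothetical names; run on the primary replica. Allows read-only
-- connections on the secondary so queries (and log backups) can run there.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON 'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
-- Use ALLOW_CONNECTIONS = READ_INTENT_ONLY instead to limit access to
-- connections that specify ApplicationIntent=ReadOnly.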

amazon web services – EC2 Instance cannot connect to ECS Cluster

Hello,
I have an empty AWS ECS cluster, but I am unable to put instances into it.
I wanted to use launch templates and an Auto Scaling group, but I am unable to assign the created EC2 instance to the cluster.

The issue is shown in ecs-agent.log:

level=error time=2020-10-17T23:23:37Z msg="Unable to register as a container instance with ECS: RequestError: send request failedncaused by: Post "https://ecs.eu-central-1.amazonaws.com/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" module=client.go
level=error time=2020-10-17T23:23:37Z msg="Error registering: RequestError: send request failedncaused by: Post "https://ecs.eu-central-1.amazonaws.com/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" module=agent.go

Notes:

  • Using AMI ami-0eff571a24849e852
  • Cluster name: debug
  • Region is eu-central-1
  • Instance has no public IP
  • Instance is in the 10.10.100.0/24 subnet (10.10.100.14) and the VPC CIDR is 10.10.0.0/16
  • Instance can reach the internet through NAT Instance:
(ec2-user@ip-10-10-100-14 ecs)$ ping google.com
PING google.com (216.58.212.142) 56(84) bytes of data.
64 bytes from ams15s21-in-f14.1e100.net (216.58.212.142): icmp_seq=1 ttl=109 time=50.1 ms
64 bytes from ams15s21-in-f142.1e100.net (216.58.212.142): icmp_seq=2 ttl=109 time=40.1 ms
  • DNS to outside is resolving fine
(ec2-user@ip-10-10-100-14 ecs)$ nslookup google.com
Server:     10.10.0.2
Address:    10.10.0.2#53

Non-authoritative answer:
Name:   google.com
Address: 216.58.212.142
  • Just to be sure, I have created VPC endpoints to ECS in the VPC and subnet where the instance lives
  • I have attached a security group with no restrictions, for testing
  • ecs.config:
ECS_CLUSTER=debug
ECS_BACKEND_HOST=
(ec2-user@ip-10-10-100-14 ecs)$ nslookup ecs.eu-central-1.amazonaws.com
Server:     10.10.0.2
Address:    10.10.0.2#53

Non-authoritative answer:
Name:   ecs.eu-central-1.amazonaws.com
Address: 10.10.100.219
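  • One more check worth doing (an assumption on my part, worth verifying: when registration goes over private networking, the agent also needs interface endpoints for com.amazonaws.eu-central-1.ecs-agent and com.amazonaws.eu-central-1.ecs-telemetry, not only ecs). Listing which endpoints exist:

# Assumes the AWS CLI is configured; lists interface endpoint service
# names so the three ECS-related ones can be checked off:
aws ec2 describe-vpc-endpoints --region eu-central-1 \
    --query "VpcEndpoints[].ServiceName"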

Does anyone have any suggestions?

redis-cli --cluster create command is not working?

I installed the redis-server package through the standard apt command, but when I try to run redis-cli --cluster create, it does not work. I get "Unrecognized option or bad number of args for: '--cluster'". Please help me fix it.
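A note on the likely cause: the --cluster family of subcommands only exists in redis-cli 5.0 and later; older distro packages shipped the cluster tooling as the separate redis-trib.rb script instead. A quick check (the node addresses below are hypothetical examples):

redis-cli --version    # --cluster needs redis-cli 5.0 or newer

# Syntax for redis-cli >= 5.0; addresses are hypothetical:
redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
    --cluster-replicas 0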

failover – How to fail over MySQL 8.0 on a Red Hat Cluster without the application re-logging in, avoiding service interruption

We get the following error after every fail-over test case is run: "Host is blocked because of many connection errors." We then need to log back in to the database and run FLUSH HOSTS to allow new connections, but this disrupts the ongoing services and defeats the whole purpose of HA. We have bought Red Hat HA Add-On licenses to create the cluster on which the MySQL 8.0 DB is installed. The app connects to the DB via a VIP, and if we test a cluster node fail-over during any DB transaction, it disrupts the ongoing app jobs. We tried setting the connection timeout to 30 seconds (by which time the fail-over completes and the DB resources have successfully moved to the second node) and the connection lifecycle to 10 seconds in the connection string, but saw the same behaviour. We also ran SET GLOBAL max_connect_errors=10000. Please suggest a way forward.
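A minimal sketch of the usual mitigation for the blocked-host error (the values are illustrative, not tuned recommendations):

-- Illustrative value: raise the threshold so a fail-over burst of
-- aborted connections can no longer trip the block:
SET GLOBAL max_connect_errors = 100000000;
-- Clear hosts that are already blocked:
FLUSH HOSTS;
-- Note: FLUSH HOSTS is deprecated from MySQL 8.0.23; the replacement is:
-- TRUNCATE TABLE performance_schema.host_cache;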

linux networking – Hide GKE cluster pod IP addresses behind a single IP address in a site-to-site VPN use case using GCP Cloud VPN

I am currently developing a Node.js application deployed to a GKE cluster on Google Cloud Platform. The application needs to call a 3rd-party API that is only accessible through VPN, so I have to establish a site-to-site VPN to the 3rd-party API provider's network.

I know that a site-to-site VPN can be implemented using GCP Cloud VPN, and I have previous experience with it. The problem is that this 3rd party will only allow one single IP address from my VPC to access their network, while every pod in the GKE cluster has its own ephemeral IP.

The question is: how can I make the outgoing API calls from the GKE cluster to the 3rd-party API come from one single IP address, so that the 3rd-party provider's admin can whitelist that address?

I am thinking about using a Linux VM as a NAT router, so that API calls to the 3rd party go through this NAT router first and from there to the Cloud VPN gateway. But when I look at the VPC route table, I can't see how this can be implemented: I can't specify a particular network segment as the source; I can only set the destination and the next hop, which affects all instances in the VPC (but see the sketch below).
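One possibly relevant detail (an assumption on my part, worth verifying against the GCP docs): custom static routes cannot match on a source range, but they can be scoped to instances by network tag, which approximates source-based routing for tagged GKE nodes. A hypothetical sketch with made-up names:

# Hypothetical names throughout; the route applies only to instances
# carrying the tag, so other VMs in the VPC are unaffected.
gcloud compute routes create route-to-3rdparty-via-nat \
    --network=my-vpc \
    --destination-range=203.0.113.0/24 \
    --next-hop-instance=nat-router \
    --next-hop-instance-zone=europe-west1-b \
    --tags=gke-my-cluster-node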

Below is the link to the current topology of my VPC, for reference:

Topology

Is this something that can be done in GCP, or am I looking at the problem in the wrong way?

Thank You

centos7 – Storage balancing in elasticsearch cluster

I have an Elasticsearch cluster with 5 nodes, with the following used-storage distribution:

  • Node1: 76%
  • Node2: 94%
  • Node3: 88%
  • Node4: 73%
  • Node5: 74%

How can I balance/level the used storage across the nodes?

Nodes 2 and 3 have reached the watermark threshold and the cluster is stuck.
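(Aside for diagnosis, not from the original post: the allocator's view of disk usage and the effective watermark levels can be checked directly; the default watermark values in the comment are assumptions to verify against your configuration.)

# Per-node disk usage and shard counts as the allocator sees them:
curl -s 'localhost:9200/_cat/allocation?v'

# Effective disk watermark settings (defaults, if unchanged, are
# low 85%, high 90%, flood_stage 95%):
curl -s 'localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true'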

In my other clusters, the used storage is balanced/level. Example from another cluster:

  • Node1: 61%
  • Node2: 63%
  • Node3: 60%
  • Node4: 63%
  • Node5: 62%

Thanks

Scale-down of Ambari cluster is not working in OpenStack Train Sahara

When scaling down an Ambari cluster in OpenStack Train Sahara, it fails with the error "object of type 'filter' has no len()" in the Cluster Event tab in Horizon. When doing the same with the OpenStack client, it reports that cluster scale-down has started, but the cluster is not actually scaled down. In sahara-engine.log, the cluster status changes to Decommissioning and then immediately back to Active without showing any error.
The Sahara engine log is:
2020-09-19 18:24:46.485 13644 WARNING sahara.context (-) Arguments dropped when creating context: {'user': 'a89f302f93f144e29f6e1a8204e766eb', 'tenant': '66b30bbab9814815a678bd2be77ce435', 'system_scope': None, 'project': '66b30bbab9814815a678bd2be77ce435', 'domain': None, 'user_domain': None, 'project_domain': None, 'read_only': False, 'show_deleted': False, 'global_request_id': None, 'user_identity': 'a89f302f93f144e29f6e1a8204e766eb 66b30bbab9814815a678bd2be77ce435 - - -', 'is_admin_project': True, 'user_name': 'admin', 'project_name': 'admin', 'client_timeout': None}
2020-09-19 18:24:50.367 13644 INFO sahara.utils.cluster (req-b3317ce2-b72e-44e6-911f-94fb77cc4067 a89f302f93f144e29f6e1a8204e766eb 66b30bbab9814815a678bd2be77ce435 - - -) (instance: none, cluster: f3194133-331d-449d-90b7-5cbf335392a3) Cluster status has been changed. New status=Decommissioning
2020-09-19 18:24:50.368 13644 DEBUG sahara.utils.notification.sender (req-b3317ce2-b72e-44e6-911f-94fb77cc4067 a89f302f93f144e29f6e1a8204e766eb 66b30bbab9814815a678bd2be77ce435 - - -) (instance: none, cluster: f3194133-331d-449d-90b7-5cbf335392a3) Notification about cluster is going to be sent. Notification type=sahara.cluster.update _notify /usr/lib/python3/dist-packages/sahara/utils/notification/sender.py:55
2020-09-19 18:24:50.889 13644 DEBUG oslo_service.periodic_task (-) Running periodic task SaharaPeriodicTasks.update_job_statuses run_periodic_tasks /usr/lib/python3/dist-packages/oslo_service/periodic_task.py:217
2020-09-19 18:24:50.890 13644 DEBUG sahara.service.periodic (req-6c34c65b-c29d-463d-8ffd-5e35af99217b - - - - -) Updating job statuses update_job_statuses /usr/lib/python3/dist-packages/sahara/service/periodic.py:153
2020-09-19 18:24:51.344 13644 INFO sahara.utils.cluster (req-b3317ce2-b72e-44e6-911f-94fb77cc4067 a89f302f93f144e29f6e1a8204e766eb 66b30bbab9814815a678bd2be77ce435 - - -) (instance: none, cluster: f3194133-331d-449d-90b7-5cbf335392a3) Cluster status has been changed. New status=Active
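Not from the original report, but the error text is a classic Python 3 symptom: filter() returns a lazy iterator, and calling len() on it raises exactly "object of type 'filter' has no len()". A minimal reproduction of the pattern (the variable names are made up):

# Minimal reproduction; names are hypothetical.
instances = ["node-1", "node-2", "node-3", "node-4"]
to_remove = filter(lambda name: name.endswith(("3", "4")), instances)

# len(to_remove)             # TypeError: object of type 'filter' has no len()
print(len(list(to_remove)))  # materialize first: prints 2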

how to get users in a socket.io room using socket.io-redis in a pm2 cluster

I have been looking for a solution to this but have not found one yet. As the title says, we are using socket.io, socket.io-redis, and pm2 cluster mode for a chat application. This works well; however, grabbing the users in a specific room only returns the users connected to the node you are on.

I have seen some responses saying you can track the sockets via Redis yourself, but those solutions are a few years old, so I was hoping there might be a built-in solution in the socket.io-redis project that I'm not aware of.

Essentially, when a user connects to a room, we call:

socketID in io.nsps['/'].adapter.rooms[room].sockets

This gives us an object of all the users in that room; however, in a cluster it only grabs the users on the current node, so some are left out.
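For what it's worth, the socket.io-redis adapter does ship a cross-node call (assuming socket.io 2.x with socket.io-redis; worth verifying against the versions in use): adapter.clients() aggregates socket IDs from every node over Redis pub/sub. A minimal sketch:

// Minimal sketch; assumes socket.io 2.x with the socket.io-redis adapter.
const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

// 'chat-room' is a hypothetical room name. The callback receives the
// socket IDs gathered from every node in the pm2 cluster:
io.of('/').adapter.clients(['chat-room'], (err, clients) => {
  console.log(clients); // array of socket IDs across all nodes
});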

Any help would be appreciated.

Thanks.

how to cluster a cPanel server across 2 locations?

Hello,
we have 1 dedicated server with cPanel and MariaDB.
Sometimes the server is out of reach, so I'm thinking of having another server when s… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1822146&goto=newpost