OpenStack Instances not running after server reboot

After a power cut at our premises, I tried to start the instances in OpenStack, but none of them started up successfully. What could be the issue?
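Without logs it is hard to say, but after a power cut the usual first checks are whether the control-plane services came back and what fault Nova recorded for the instances. A diagnostic sketch (instance names are placeholders):

```shell
# Did the nova/neutron services come back up after the reboot?
openstack compute service list
openstack network agent list

# What state and fault message does one affected instance report?
openstack server show <instance> -c status -c fault

# If the services are all up, try a hard reboot of the instance
openstack server reboot --hard <instance>
```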

Thanks,

openstack – Connection timeouts from nova-compute to Keystone, RabbitMQ, etc.

I’ve been working on (and off) a deployment of OpenStack over the past few months (nearly a year), and I’ve come across a number of issues during the deployment, most of which were either bad switch configuration or bad configuration in the Heat templates.

Starting from a fresh deployment, I’ve been able to complete a successful OpenStack deployment multiple times; however, as I was preparing the overcloud with projects, I was unable to create an instance. The output of “compute service list”:

openstack compute service list
+----+----------------+----------------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host                 | Zone     | Status  | State | Updated At                 |
+----+----------------+----------------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller-0.host.cp | internal | enabled | up    | 2021-04-20T20:43:03.000000 |
|  2 | nova-scheduler | controller-0.host.cp | internal | enabled | up    | 2021-04-20T20:43:01.000000 |
| 12 | nova-compute   | compute-0.host.cp    | nova     | enabled | down  | 2021-04-20T09:47:52.000000 |
+----+----------------+----------------------+----------+---------+-------+----------------------------+

I also attempted a scale-out with one additional node; it’s not present in the list above or in the “hypervisor list”, but it is visible in a “server list” from the undercloud node:

openstack server list
+--------------------------------------+--------------+--------+-----------------------+----------------+-----------+
| ID                                   | Name         | Status | Networks              | Image          | Flavor    |
+--------------------------------------+--------------+--------+-----------------------+----------------+-----------+
| 5cb29129-7ce8-439a-b00b-3868d5a9aa74 | compute-1    | ACTIVE | ctlplane=10.128.0.136 | overcloud-full | baremetal |
| 58c3d587-d2a8-4601-87a7-3fd3d32a78b6 | controller-0 | ACTIVE | ctlplane=10.128.0.5   | overcloud-full | baremetal |
| 288dde8f-5664-42b2-b9f4-333992964dde | compute-0    | ACTIVE | ctlplane=10.128.0.75  | overcloud-full | baremetal |
+--------------------------------------+--------------+--------+-----------------------+----------------+-----------+

I’ve carried out two fresh installs, and I’m now faced with the following issue on every compute service that is meant to connect to the controller node:

2021-04-23 22:28:37.891 7 ERROR nova keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://10.127.2.8:5000/v3/auth/tokens: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))

A manual curl from the compute node to the keystone endpoint yields the following (expected) output:

curl http://10.127.2.8:5000/v3/auth/tokens
{"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}

I don’t believe the network stack is causing this issue; it seems to be something else. I’d appreciate any assistance with this.
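Since a one-off curl succeeds while nova-compute sees connections dropped, one thing worth checking is whether the failure is intermittent (for example, a flapping HAProxy backend) rather than a hard break. A quick sketch, reusing the endpoint from the error above:

```shell
# Hit the Keystone endpoint repeatedly; a mix of 401s (normal for an
# unauthenticated GET) and "000" (connection failure) would point at a
# flapping backend or load balancer rather than a routing problem
for i in $(seq 1 30); do
  curl -s -o /dev/null -w '%{http_code}\n' --max-time 5 \
      http://10.127.2.8:5000/v3/auth/tokens
  sleep 1
done
```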

Deployment Information:
Controller Nodes = 1
Compute nodes = 2 deployed, 4 introspected
OS = CentOS Stream 8 (both undercloud and overcloud)
Networking:

  • 4 Interfaces: 1 primary, 2 port bond (OVS + LACP), 1 storage port
  • 2 Juniper EX3400s clustered (LACP configured on bonded ports)

Let me know if any further information is required.

ubuntu 18.04 – Openstack: The ext4 file system creation in partition #1 of Virtual disk 1 (vda) failed

I followed the official devstack All-In-One Single Machine installation, and after everything installed, when I create an Ubuntu 18.04 desktop instance with 1 GB RAM and a 10 GB hard disk, the message “The ext4 file system creation in partition #1 of Virtual disk 1 (vda) failed” shows up. My environment has a 200 GB hard disk and 10 GB RAM with 4 CPUs. What is the problem and what should I do? Thanks!
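Two common causes worth ruling out are the host running out of space where devstack stores instance disks, and the image demanding more resources than the flavor provides. A sketch (the path is the devstack default, and the image/flavor names are placeholders):

```shell
# Free space where devstack keeps instance disks by default
df -h /opt/stack/data

# Does the image's min_disk/min_ram exceed the flavor's 10 GB / 1 GB?
openstack image show <ubuntu-18.04-image> -c min_disk -c min_ram
openstack flavor show <flavor> -c disk -c ram
```

Separately, a desktop image is quite heavy for 1 GB of RAM; the installer’s filesystem-creation step can fail from memory pressure alone, so a server or cloud image may be a better test.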

linux – How to Upgrade OpenStack Kolla Train to Ussuri

We have several OpenStack clusters that need routine upgrades. Given that multiple clusters must be upgraded while keeping the same networking and IP addresses, without impacting OpenStack virtual machines, what is generally best practice?

Example:

[XXXXXXXXXXXXXX@chrnc-dev-os-build-01 ~]$ cat /etc/hosts | awk '{print $3}'
localhost.localdomain
localhost.localdomain
chrnc-dev-os-control-01
chrnc-dev-os-control-02
chrnc-dev-os-control-03
chrnc-dev-os-cephcompute-01
chrnc-dev-os-cephcompute-02
chrnc-dev-os-cephcompute-03
chrnc-dev-os-cephcompute-04
chrnc-dev-os-cephcompute-05
chrnc-dev-os-cephcompute-06
chrnc-dev-os-cephcompute-07
chrnc-dev-os-cephcompute-08
chrnc-dev-os-compute-01
chrnc-dev-os-compute-02

Each node hosts docker images and containers as such:

Controller Nodes:

[root@chrnc-dev-os-control-01 ~]# docker ps
CONTAINER ID        IMAGE                                                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
888f50d8cea3        albertbraden/horizon_adjutant_ussuri:nosignup                                       "dumb-init --single-…"   6 weeks ago         Up 5 weeks                              horizon
2f055899ab3d        albertbraden/adjutant_ussuri                                                        "dumb-init --single-…"   7 weeks ago         Up 6 weeks                              adjutant_api
d98d446c8a62        harbor.chtrse.com/kolla/centos-source-prometheus-alertmanager:train-dev             "dumb-init --single-…"   7 months ago        Up 2 months                             prometheus_alertmanager
473be43105e3        harbor.chtrse.com/kolla/centos-source-masakari-engine:train-dev                     "dumb-init --single-…"   8 months ago        Up 8 months                             masakari_engine
232a62be8133        harbor.chtrse.com/kolla/centos-source-masakari-api:train-dev                        "dumb-init --single-…"   8 months ago        Up 8 months                             masakari_api
5cd905688a06        harbor.chtrse.com/kolla/centos-source-grafana:train-dev                             "dumb-init --single-…"   8 months ago        Up 6 weeks                              grafana
5b237226aabd        harbor.chtrse.com/kolla/centos-source-heat-engine:train-dev                         "dumb-init --single-…"   8 months ago        Up 8 months                             heat_engine
88b982a5c9d3        harbor.chtrse.com/kolla/centos-source-heat-api-cfn:train-dev                        "dumb-init --single-…"   8 months ago        Up 8 months                             heat_api_cfn
2aed08435a61        harbor.chtrse.com/kolla/centos-source-heat-api:train-dev                            "dumb-init --single-…"   8 months ago        Up 8 months                             heat_api
8683f4b0751f        harbor.chtrse.com/kolla/centos-source-neutron-bgp-dragent:train-dev                 "dumb-init --single-…"   8 months ago        Up 5 weeks                              neutron_bgp_dragent
162e5254bfd2        harbor.chtrse.com/kolla/centos-source-neutron-metadata-agent:train-dev              "dumb-init --single-…"   8 months ago        Up 5 weeks                              neutron_metadata_agent
6c3213bd07be        harbor.chtrse.com/kolla/centos-source-neutron-l3-agent:train-dev                    "dumb-init --single-…"   8 months ago        Up 5 weeks                              neutron_l3_agent
a8e8966f7b7b        harbor.chtrse.com/kolla/centos-source-neutron-dhcp-agent:train-dev                  "dumb-init --single-…"   8 months ago        Up 5 weeks                              neutron_dhcp_agent
5bf6a261f7bb        harbor.chtrse.com/kolla/centos-source-neutron-openvswitch-agent:train-dev           "dumb-init --single-…"   8 months ago        Up 5 weeks                              neutron_openvswitch_agent
021ad687fbfa        harbor.chtrse.com/kolla/centos-source-neutron-server:train-dev                      "dumb-init --single-…"   8 months ago        Up 5 weeks                              neutron_server
f92dbecfe2c3        harbor.chtrse.com/kolla/centos-source-openvswitch-vswitchd:train-dev                "dumb-init --single-…"   8 months ago        Up 8 months                             openvswitch_vswitchd
5b1b1790fcef        harbor.chtrse.com/kolla/centos-source-openvswitch-db-server:train-dev               "dumb-init --single-…"   8 months ago        Up 8 months                             openvswitch_db
6671657396ee        harbor.chtrse.com/kolla/centos-source-nova-novncproxy:train-dev                     "dumb-init --single-…"   8 months ago        Up 6 weeks                              nova_novncproxy
e093cd250035        harbor.chtrse.com/kolla/centos-source-nova-conductor:train-dev                      "dumb-init --single-…"   8 months ago        Up 6 weeks                              nova_conductor
502677f79280        harbor.chtrse.com/kolla/centos-source-nova-api:train-dev                            "dumb-init --single-…"   8 months ago        Up 6 weeks                              nova_api
3b5720342157        harbor.chtrse.com/kolla/centos-source-nova-scheduler:train-dev                      "dumb-init --single-…"   8 months ago        Up 6 weeks                              nova_scheduler
2439f149a771        harbor.chtrse.com/kolla/centos-source-placement-api:train-dev                       "dumb-init --single-…"   8 months ago        Up 6 weeks                              placement_api
6a8972b931e8        harbor.chtrse.com/kolla/centos-source-cinder-backup:train-dev                       "dumb-init --single-…"   8 months ago        Up 8 months                             cinder_backup
6dac6aa63dfc        harbor.chtrse.com/kolla/centos-source-cinder-volume:train-dev                       "dumb-init --single-…"   8 months ago        Up 5 weeks                              cinder_volume
3eb1bc3d81e4        harbor.chtrse.com/kolla/centos-source-cinder-scheduler:train-dev                    "dumb-init --single-…"   8 months ago        Up 8 months                             cinder_scheduler
34439cf3cff3        harbor.chtrse.com/kolla/centos-source-cinder-api:train-dev                          "dumb-init --single-…"   8 months ago        Up 8 months                             cinder_api
31ad7a0c33fd        harbor.chtrse.com/kolla/centos-source-glance-api:train-dev                          "dumb-init --single-…"   8 months ago        Up 8 months                             glance_api
9f3cc810d760        harbor.chtrse.com/kolla/centos-source-keystone-fernet:train-dev                     "dumb-init --single-…"   8 months ago        Up 6 weeks                              keystone_fernet
7c60a8bdbed1        harbor.chtrse.com/kolla/centos-source-keystone-ssh:train-dev                        "dumb-init --single-…"   8 months ago        Up 6 weeks                              keystone_ssh
49dbe16d37cd        harbor.chtrse.com/kolla/centos-source-keystone:train-dev                            "dumb-init --single-…"   8 months ago        Up 6 weeks                              keystone
e622f979652d        harbor.chtrse.com/kolla/centos-source-rabbitmq:train-dev                            "dumb-init --single-…"   8 months ago        Up 5 weeks                              rabbitmq
168f8fe24653        harbor.chtrse.com/kolla/centos-source-prometheus-blackbox-exporter:train-dev        "dumb-init --single-…"   8 months ago        Up 2 months                             prometheus_blackbox_exporter
c1e2e25af70a        harbor.chtrse.com/kolla/centos-source-prometheus-elasticsearch-exporter:train-dev   "dumb-init --single-…"   8 months ago        Up 2 months                             prometheus_elasticsearch_exporter
e8002beabf0f        harbor.chtrse.com/kolla/centos-source-prometheus-openstack-exporter:train-dev       "dumb-init --single-…"   8 months ago        Up 2 months                             prometheus_openstack_exporter
1672bd12c7f3        harbor.chtrse.com/kolla/centos-source-prometheus-cadvisor:train-dev                 "dumb-init --single-…"   8 months ago        Up 2 months                             prometheus_cadvisor
f904685d923b        harbor.chtrse.com/kolla/centos-source-prometheus-memcached-exporter:train-dev       "dumb-init --single-…"   8 months ago        Up 2 months                             prometheus_memcached_exporter
ce6cb7b1b124        harbor.chtrse.com/kolla/centos-source-prometheus-haproxy-exporter:train-dev         "dumb-init --single-…"   8 months ago        Up 2 months                             prometheus_haproxy_exporter
706c8a587c0a        harbor.chtrse.com/kolla/centos-source-prometheus-mysqld-exporter:train-dev          "dumb-init --single-…"   8 months ago        Up 2 months                             prometheus_mysqld_exporter
d0eb28e297c0        harbor.chtrse.com/kolla/centos-source-prometheus-node-exporter:train-dev            "dumb-init --single-…"   8 months ago        Up 2 months                             prometheus_node_exporter
b4e85f4b9a9f        harbor.chtrse.com/kolla/centos-source-prometheus-server:train-dev                   "dumb-init --single-…"   8 months ago        Up 7 days                               prometheus_server
1f50adee590f        harbor.chtrse.com/kolla/centos-source-memcached:train-dev                           "dumb-init --single-…"   8 months ago        Up 6 weeks                              memcached
6d1996f4a9b2        harbor.chtrse.com/kolla/centos-source-mariadb:train-dev                             "dumb-init -- kolla_…"   8 months ago        Up 8 months                             mariadb
fd098e1b5dfd        harbor.chtrse.com/kolla/centos-source-kibana:train-dev                              "dumb-init --single-…"   8 months ago        Up 8 months                             kibana
9523047b56f9        harbor.chtrse.com/kolla/centos-source-elasticsearch:train-dev                       "dumb-init --single-…"   8 months ago        Up 5 months                             elasticsearch
e4c83ccf2c97        harbor.chtrse.com/kolla/centos-source-keepalived:train-dev                          "dumb-init --single-…"   8 months ago        Up 8 months                             keepalived
436f1ea27825        harbor.chtrse.com/kolla/centos-source-haproxy:train-dev                             "dumb-init --single-…"   8 months ago        Up 3 weeks                              haproxy
c19dbc42585d        harbor.chtrse.com/kolla/centos-source-cron:train-dev                                "dumb-init --single-…"   8 months ago        Up 7 weeks                              cron
8f2bfd11511f        harbor.chtrse.com/kolla/centos-source-kolla-toolbox:train-dev                       "dumb-init --single-…"   8 months ago        Up 8 months                             kolla_toolbox
c209d362a4ac        harbor.chtrse.com/kolla/centos-source-fluentd:train-dev                             "dumb-init --single-…"   8 months ago        Up 8 months                             fluentd

We are trying to find the best way to upgrade these images without impacting the environment.
Installation occurs via kolla-ansible: https://docs.openstack.org/project-deploy-guide/kolla-ansible/latest/quickstart.html#install-kolla-ansible

The suggested upgrade procedure is: https://docs.openstack.org/kolla-ansible/latest/user/operating-kolla.html

But how does this upgrade Ceph, and what is the overall impact?
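For the Kolla Ansible side, the operating-kolla guide boils down to bumping kolla-ansible to the target release series, pulling the new images, then running the rolling upgrade. A sketch (assuming an inventory file named multinode and that globals.yml already points at Ussuri image tags):

```shell
# 10.x is the Ussuri series of kolla-ansible
pip install --upgrade 'kolla-ansible>=10,<11'

# Pre-fetch the Ussuri images on all nodes, then do the rolling,
# service-by-service upgrade
kolla-ansible -i multinode pull
kolla-ansible -i multinode upgrade
```

As for Ceph: as far as I know, Kolla Ansible dropped its built-in Ceph deployment after Train, so the Ceph containers would need to be migrated to an externally managed Ceph (e.g. ceph-ansible or cephadm) as part of this upgrade; that migration, rather than the OpenStack services themselves, is usually the impactful part.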

centos8 – Race condition with "openstack overcloud node import --introspect --provide" in Victoria?

I’m trying to execute openstack overcloud node import --introspect --provide <file.json> on a fresh install, but consistently getting

Mar 11 16:59:27 victoriadirector platform-python(353569): ansible-os_tripleo_baremetal_node_introspection os_tripleo_baremetal_node_introspection: INFO Starting introspection of node 1a496655-4a8e-4600-afce-97d5dd6d9ae9
Mar 11 16:59:27 victoriadirector platform-python(353569): ansible-os_tripleo_baremetal_node_introspection os_tripleo_baremetal_node_introspection: ERROR Node 1a496655-4a8e-4600-afce-97d5dd6d9ae9 can't start introspection because: BadRequestException: 400: Client Error for url: https://10.100.4.7:13050/v1/introspection/1a496655-4a8e-4600-afce-97d5dd6d9ae9, Invalid provision state for introspection: "verifying", valid states are "('inspect failed', 'inspect wait', 'manageable', 'inspecting', 'enroll')"
Mar 11 16:59:27 victoriadirector platform-python(353569): ansible-os_tripleo_baremetal_node_introspection os_tripleo_baremetal_node_introspection: INFO Starting introspection of node 14225574-e0c8-4c77-bec1-0d52e4525b08
Mar 11 16:59:27 victoriadirector platform-python(353569): ansible-os_tripleo_baremetal_node_introspection os_tripleo_baremetal_node_introspection: ERROR Node 14225574-e0c8-4c77-bec1-0d52e4525b08 can't start introspection because: BadRequestException: 400: Client Error for url: https://10.100.4.7:13050/v1/introspection/14225574-e0c8-4c77-bec1-0d52e4525b08, Invalid provision state for introspection: "verifying", valid states are "('inspect failed', 'inspect wait', 'manageable', 'inspecting', 'enroll')"
Mar 11 16:59:27 victoriadirector platform-python(353569): ansible-os_tripleo_baremetal_node_introspection os_tripleo_baremetal_node_introspection: INFO Starting introspection of node 28c024f8-3b22-46e2-9c8c-661329fcc9c9
Mar 11 16:59:27 victoriadirector platform-python(353569): ansible-os_tripleo_baremetal_node_introspection os_tripleo_baremetal_node_introspection: ERROR Node 28c024f8-3b22-46e2-9c8c-661329fcc9c9 can't start introspection because: BadRequestException: 400: Client Error for url: https://10.100.4.7:13050/v1/introspection/28c024f8-3b22-46e2-9c8c-661329fcc9c9, Invalid provision state for introspection: "verifying", valid states are "('inspect failed', 'inspect wait', 'manageable', 'inspecting', 'enroll')"
Mar 11 16:59:27 victoriadirector platform-python(353569): ansible-os_tripleo_baremetal_node_introspection os_tripleo_baremetal_node_introspection ERROR Introspection completed with failures. 3 node(s) failed.

By the time I dump the node list

(undercloud) (stack@victoriadirector ~)$ openstack baremetal node list
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name      | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| 1a496655-4a8e-4600-afce-97d5dd6d9ae9 | os1srvp03 | None          | power off   | manageable         | False       |
| 14225574-e0c8-4c77-bec1-0d52e4525b08 | os1srvp04 | None          | power off   | manageable         | False       |
| 28c024f8-3b22-46e2-9c8c-661329fcc9c9 | os1srvp05 | None          | power off   | manageable         | False       |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+

the Provisioning State is “manageable”, which suggests introspection is being invoked before the nodes have finished “verifying”.

I’m testing Train, Ussuri and Victoria, and this is only occurring in Victoria at the moment.
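If it is a race between enrollment (“verifying”) and introspection, one possible workaround sketch is to split the combined command: import first, wait for the nodes to settle in “manageable”, then introspect:

```shell
# Import without introspecting
openstack overcloud node import <file.json>

# Wait until Provisioning State shows "manageable" for all nodes
watch openstack baremetal node list

# Then introspect everything that is manageable and make it available
openstack overcloud node introspect --all-manageable --provide
```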

How to specify a custom error message in OpenStack Horizon when a user data script fails at instance launch

I am trying to launch instances on an OpenStack server using an NS with multiple VNFs in it.

Many times the instance launches successfully, but sometimes some of the applications in that instance fail to start, either because of a configuration mistake, a network error, or an OpenStack resource failure.

I have searched Google but could not find any supporting documentation on this.

I want to know whether there is any way to set or display a custom message in Horizon, from the instance’s YAML user data script, when something fails in the instance.
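As far as I know, Horizon has no API for per-instance custom messages, but anything a user data script writes to the serial console shows up in the instance’s Log tab in Horizon (and via openstack console log show). A sketch (the service name is a placeholder):

```shell
#!/bin/bash
# user-data sketch: surface a failure marker on the serial console,
# which Horizon displays under the instance's Log tab
if ! systemctl start my-vnf-app; then
    echo "LAUNCH-FAILED: my-vnf-app did not start at $(date -Is)" > /dev/console
fi
```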

centos – How can I best get the mpt2sas drivers back into CentOS 8 [for MAAS images and OpenStack Ironic/image builder]

I need the mpt2sas drivers to get CentOS 8 working with my servers. Because it uses the neutered RHEL kernel, the device IDs have been removed and the drivers won’t load without using a DD disk. I have both MAAS and the OpenStack Ironic/image service. To get those drivers back into the images that those two systems build and upload, as far as I can tell I need either a custom vanilla kernel RPM with the proper drivers, or a way to get those build systems to pull the DD disk in at build time.

I am not sure which option is more realistic. MAAS has the KS boot DD option, which I was not able to get working with packer-maas; I haven’t had much experience with Packer or Kickstart, so it may be something I did wrong.

For the other option, I had no issues building the kernel, but I am at a loss on how to build an RPM from it, as all the instructions I have found date from the 2.6 kernel era. I was expecting to have to use a local RPM repo mirroring the CentOS 8 repo and make my vanilla kernel appear as the latest in the kernel series.
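For the “custom RPM with the proper drivers” route, it may be worth checking first whether ELRepo’s pre-built driver packages already cover the removed device IDs before rebuilding a kernel (upstream, mpt2sas was merged into the mpt3sas driver, so the el8 package to look at is kmod-mpt3sas). A sketch, under the assumption that your PCI IDs are in its list:

```shell
# Hypothetical route via ELRepo's kmod packages; verify your controller's
# PCI IDs are covered before baking this into images
dnf install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
dnf install kmod-mpt3sas
```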

Any suggestions on how to make this work would be greatly appreciated.

Openstack instance lost internet access after attaching floating IP

Maybe someone has had the same problem.

I have installed OpenStack Victoria on two virtual machines (one controller node, one compute node) running Ubuntu 20.04. Each node has two network interfaces: one on the mgmt network and one on the provider network. I have created a private network and attached it to a router. With this configuration I am able to access the internet.

But when I attach a floating IP to my instance, it loses internet connectivity. I can access the instance from outside, but the instance cannot reach the network gateway. I checked it with ip netns exec ping 8.8.8.8; it works until I attach the FIP.

I think it is a routing problem, but I cannot find where. Do you guys have any ideas?

10.0.0.0/24 – mgmt network

10.0.2.0/24 – external (provider) network

Configuration of the Linux bridge agent:

root@compute1:/# grep -v "^#" /etc/neutron/plugins/ml2/linuxbridge_agent.ini | grep -v "^$"
[DEFAULT]
[agent]
extensions = qos
[linux_bridge]
physical_interface_mappings = provider:ens34
[network_log]
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = true
local_ip = 10.0.0.131
l2_population = true

root@controller1:/# openstack subnet show provider
| Field                | Value |
| allocation_pools     | 10.0.2.50-10.0.2.150 |
| cidr                 | 10.0.2.0/24 |
| created_at           | 2021-02-22T16:17:20Z |
| description          |  |
| dns_nameservers      | 8.8.8.8 |
| dns_publish_fixed_ip | None |
| enable_dhcp          | True |
| gateway_ip           | 10.0.2.1 |
| host_routes          |  |
| id                   | 7d07101a-4696-4ff8-88bc-fa4ffde1622f |
| ip_version           | 4 |
| ipv6_address_mode    | None |
| ipv6_ra_mode         | None |
| name                 | provider |
| network_id           | d65d17fe-9829-44d5-bf07-1abb70f9d523 |
| prefix_length        | None |
| project_id           | 957f142f850240b5801023369eace69a |
| revision_number      | 0 |
| segment_id           | None |
| service_types        |  |
| subnetpool_id        | None |

root@controller1:/# openstack router show router1
| Field                   | Value |
| admin_state_up          | UP |
| availability_zone_hints |  |
| availability_zones      | nova |
| created_at              | 2021-02-22T16:17:51Z |
| description             |  |
| distributed             | False |
| external_gateway_info   | {"network_id": "d65d17fe-9829-44d5-bf07-1abb70f9d523", "external_fixed_ips": [{"subnet_id": "7d07101a-4696-4ff8-88bc-fa4ffde1622f", "ip_address": "10.0.2.51"}], "enable_snat": true} |
| flavor_id               | None |
| ha                      | False |
| id                      | fa11f06e-906c-4ae9-8176-20fb74e1cacd |
| interfaces_info         | [{"port_id": "67d37c5f-1250-45e7-a003-78493921b4d6", "ip_address": "172.16.1.1", "subnet_id": "b0762924-6c7a-453f-a9b8-788e15e5f0c0"}] |
| name                    | router1 |
| project_id              | 957f142f850240b5801023369eace69a |
| revision_number         | 4 |
| routes                  |  |
| status                  | ACTIVE |

root@controller1:/# ip netns
qrouter-fa11f06e-906c-4ae9-8176-20fb74e1cacd (id: 3)
qdhcp-d65d17fe-9829-44d5-bf07-1abb70f9d523 (id: 0)
qdhcp-f6a245eb-001d-47b1-8af5-38178585fe87 (id: 6)
qdhcp-0fb79928-ae24-4d85-8c58-b1acb9c8c9d2 (id: 2)
qdhcp-0ab1f94c-1e06-485c-b024-548a927a5e36 (id: 1)

root@controller1:/# ip netns exec qrouter-fa11f06e-906c-4ae9-8176-20fb74e1cacd ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=128 time=11.7 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 11.679/11.679/11.679/0.000 ms

root@controller1:/# ip netns exec qrouter-fa11f06e-906c-4ae9-8176-20fb74e1cacd ip route
default via 10.0.2.1 dev qg-61a6ea6f-7e proto static
10.0.2.0/24 dev qg-61a6ea6f-7e proto kernel scope link src 10.0.2.51
172.16.1.0/24 dev qr-67d37c5f-12 proto kernel scope link src 172.16.1.1

So everything is working fine… And now I attach the FIP:

root@controller1:/# openstack floating ip list
| ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          |
| 8a3333a9-345d-4b2a-9d63-420f09e4c020 | 10.0.2.106          | 172.16.1.236     | edef7b03-25a9-43b4-9953-831539056ac3 | d65d17fe-9829-44d5-bf07-1abb70f9d523 | 957f142f850240b5801023369eace69a |

It is pingable from my local PC and I can access the instance via SSH as well, but I cannot access the internet from the provider network:

root@controller1:/# ip netns exec qrouter-fa11f06e-906c-4ae9-8176-20fb74e1cacd ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2040ms

This is a tcpdump from the compute node:

root@compute1:/# tcpdump -i ens34 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens34, link-type EN10MB (Ethernet), capture size 262144 bytes
17:30:00.258697 IP 10.0.2.106 > 8.8.8.8: ICMP echo request, id 41872, seq 0, length 64
17:30:01.259844 IP 10.0.2.106 > 8.8.8.8: ICMP echo request, id 41872, seq 1, length 64

So packets are going out through the provider interface, ens34. I think it is a routing problem on the compute node, but I cannot find where it is.
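The tcpdump shows the echo requests leaving ens34 already SNATed to the floating IP, so a sketch of where to look next is the NAT table in the router namespace (on the node hosting the qrouter namespace, controller1 here) and whatever sits upstream of the provider network, which has to route or ARP for 10.0.2.106 on the way back:

```shell
# Inspect the floating IP's DNAT/SNAT rules in the router namespace
ip netns exec qrouter-fa11f06e-906c-4ae9-8176-20fb74e1cacd \
    iptables -t nat -S | grep -E '10.0.2.106|SNAT|DNAT'

# Watch ens34 for replies: if requests go out but no replies come back,
# the problem is upstream (return routing/ARP), not on the compute node
tcpdump -ni ens34 'icmp and host 8.8.8.8'
```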

Openstack charm with Juju and Maas lxd issues!

Hello,

We’re deploying OpenStack on 6 machines via MAAS and Juju using the openstack base charms. Each machine has 2 x 10 GbE interfaces (en… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1838021&goto=newpost