etcd – Kubernetes API: Compare and update config map key

etcd has a concept of an atomic compare-and-update: it compares a key’s value before executing an update. I’d like to use this feature to update a ConfigMap in my Kubernetes cluster, but only if the existing ConfigMap data, or a specific data key, matches a certain value.

Example ConfigMap:

curl -X POST -H 'Content-Type: application/json' \
    -d '{"apiVersion": "v1", "kind": "ConfigMap", "metadata": {"name": "test"}, "data": {"foo": "1"}}' \
    http://localhost:8001/api/v1/namespaces/default/configmaps

I need to interact with the Kubernetes API, or directly with Kubernetes’s etcd if that is possible (is it?), and I don’t want to rely on resourceVersion. I’d like to depend on my own version, which is actually a key in the ConfigMap’s data. How can I achieve such an atomic update (or delete) operation?
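
For reference, etcd’s native compare-and-swap looks like the sketch below using etcdctl txn, shown against a plain key named foo. Note that Kubernetes stores objects under /registry/ as protobuf, so a raw value comparison like this won’t work as-is against a live cluster’s etcd; this only illustrates the semantics being asked for:

# an etcdctl transaction: the first block is the compare, the second
# the on-success requests, the third the on-failure requests
ETCDCTL_API=3 etcdctl txn <<'EOF'
value("foo") = "1"

put foo "2"

get foo

EOF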

kubernetes – Schedule a pod on one node and access a PV on another node

I’m running a k3s cluster on Raspberry Pi 4 boards, with a heterogeneous configuration (one node has a high-capacity but slow HDD, another has an SSD, and a third has only an SD card).

I have persistent volumes and claims of type local-path, attached to nodes and pods depending on my needs.

I’m facing a situation where I need to schedule a pod on the node with no disk to process data stored on the node with the SSD (re-encoding some video files to MP4 using ffmpeg; as this is an expensive process, I’d like to run it on an idle node rather than slow down the node with the SSD).

Is it possible to transparently mount a PV from a different node? Do I need to use NFS (see the sketch below)? Is there a more evolved type of volume that can be used on bare-metal RPi4 to do what I want?

Looking at the docs didn’t help much (there are tons of different persistent volume types, with not many use cases described).
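
For what it’s worth, the NFS route would look roughly like this: export the SSD directory from that node, then define a PV and PVC that any node can mount. The server IP, path, and size below are hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-videos-nfs
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10    # hypothetical address of the SSD node's NFS export
    path: /mnt/ssd/videos   # hypothetical exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssd-videos-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""      # bind to the PV above rather than local-path
  resources:
    requests:
      storage: 100Gi

The SSD node needs to run an NFS server, and every node that might mount the volume needs the NFS client packages installed.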

Thanks

kubernetes – kubectl logs error: You must be logged in to the server (the server has asked for the client to provide credentials)

My k8s version is v1.17.13.
My certificates expired today, so I ran
kubeadm alpha certs renew all
systemctl restart kubelet
on all my master servers.
All the kubectl commands that I ran worked fine, like
kubectl get nodes, kubectl scale, kubectl describe ...
However, running kubectl logs gives the following error:
error: You must be logged in to the server (the server has asked for the client to provide credentials)

Any idea why?
I believe my ~/.kube/config is OK, because I am able to run other kubectl commands. I deleted the kube-apiserver pod to force a restart, but I still get the same issue.
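
Not an answer in itself, but kubectl logs is proxied from the API server to the kubelet, and this error usually means the kubelet rejected the client certificate the API server presented. A diagnostic sketch, assuming kubeadm’s default certificate and manifest paths:

# this is the client cert the apiserver presents to kubelets for
# logs/exec; check that the renewal actually reached it
openssl x509 -noout -dates -in /etc/kubernetes/pki/apiserver-kubelet-client.crt
# deleting the mirror pod of a static pod does not restart the real
# container; bouncing the manifest forces a genuine apiserver restart
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/ && sleep 20 \
  && mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/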

Could you please help me with this issue?
Thanks.

kubernetes – Connection to a private k8s cluster: failed to find any PEM

I have a Kubernetes cluster which is running in a private cloud. I want to run some commands from another VM but I receive this:

[root@runner-tmp ~]# kubectl get pods --kubeconfig local-cluster.yaml
error: tls: failed to find any PEM data in certificate input

My local-cluster.yaml:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://x.x.x.x:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: FSM
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Do you have any idea where I should specify this PEM certificate and how I can generate it?
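
Worth noting: DATA+OMITTED and REDACTED are the placeholders that kubectl config view prints when run without --raw, so this kubeconfig most likely contains no real PEM data at all. A sketch of how to produce a usable copy, assuming the cluster was created with kubeadm (runner-tmp being the VM from the prompt above):

# on a control-plane node: dump the kubeconfig with real base64-encoded
# certs instead of the DATA+OMITTED placeholders
kubectl config view --raw > local-cluster.yaml
# or just copy kubeadm's admin kubeconfig to the other VM
scp /etc/kubernetes/admin.conf root@runner-tmp:~/local-cluster.yaml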

networking – Kubernetes worker node load balancing across different data centers

Please suggest how I can use a load balancer across worker nodes in two different data centers.

Example:

I have two Kubernetes worker nodes in place, w01 and w02:

w01: Data Center 01 : 192.168.01.1

w02: Data Center 02 : 192.168.02.2

The question is:

How can I load balance between these two subnets?

If I use MetalLB as the load balancer, how can I configure the Layer 4 config?

And what if I use a Kubernetes Service of type LoadBalancer?

It points me to the one IP range that I provide in the MetalLB config.

Tested:

A MetalLB config range of 192.168.01.10-192.168.01.20 works only when pods are running on 192.168.01.0/24; pods running on 192.168.02.0/24 get no response.

How can I deal with this kind of issue?
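
For context, MetalLB’s legacy ConfigMap format does allow one address pool per subnet; below is a sketch with two Layer 2 pools (the ranges come from the question, the pool names are made up). Keep in mind that in Layer 2 mode a pool’s IPs can only be announced by a node that actually sits on that subnet:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: dc1-pool
      protocol: layer2
      addresses:
      - 192.168.01.10-192.168.01.20
    - name: dc2-pool
      protocol: layer2
      addresses:
      - 192.168.02.10-192.168.02.20

A Service can then request a specific pool via the metallb.universe.tf/address-pool annotation.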

Force kubernetes to use containerd when docker is installed

Kubelet is the process responsible for the on-the-Node container actions, and it has a set of command-line flags to tell it to use a remote container management provider (both containerd and cri-o are consumed the same way, AFAIK):

(Service)
ExecStart=/usr/local/bin/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock

(adjust the socket path if your containerd listens somewhere other than the default /run/containerd/containerd.sock)

The fine manual specifically says to ensure you don’t switch those flags on an existing Node registration, since kubelet makes certain assumptions when creating containers. So if you already have a Node that is using docker, ideally stop kubelet, blow away those containers, run kubectl delete node $the_node_name, and let kubelet re-register with the correct configuration.
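
A rough sequence of those steps, assuming systemd manages kubelet and the old containers were created by Docker ($the_node_name is a placeholder, as above):

systemctl stop kubelet
# remove the Docker-managed containers so nothing stale survives
docker rm -f $(docker ps -aq)
# drop the stale Node registration
kubectl delete node $the_node_name
# kubelet re-registers using the new --container-runtime-endpoint
systemctl start kubelet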

deployment – How to automatically deploy new docker images from dockerhub to kubernetes?

I’m looking for a CD solution for my k8s cluster. Right now, after I push a commit with a dev-* tag, Docker Hub builds a new image tagged dev-latest. Here’s my deployment config:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-image-backend-local-deployment
  labels:
    app: my-image
    type: web
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 1
  selector:
    matchLabels:
      app: my-image
  template:
    metadata:
      labels:
        app: my-image
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
      - name: backend
        image: cosmicfruit/my-image:dev-latest
        imagePullPolicy: IfNotPresent
        envFrom:
        - secretRef:
            name: my-image-backend-local-secret
        ports:
        - containerPort: 8000
        readinessProbe:
          httpGet:
             path: /flvby
             port: 8000
          initialDelaySeconds: 10
          periodSeconds: 5
      - name: celery
        image: cosmicfruit/my-image:dev-latest
        imagePullPolicy: IfNotPresent
        workingDir: /code
        command: ["/code/run/celery.sh"]
        envFrom:
        - secretRef:
            name: my-image-backend-local-secret
      - name: redis
        image: redis:latest
        imagePullPolicy: IfNotPresent

I want new images to be deployed into the pods automatically, but I can’t find a relevant solution. I’ve already looked at Flux CD, but it works from GitHub rather than Docker Hub, and it feels sleazy.
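
One low-tech option, given that the manifest above pulls a mutable dev-latest tag with imagePullPolicy: IfNotPresent (which never re-pulls a tag that is already on the node): switch the backend and celery containers to imagePullPolicy: Always and have whatever receives the Docker Hub webhook run a rollout restart. The deployment name below is taken from the manifest:

# creates new pods, which re-pull dev-latest thanks to imagePullPolicy: Always
kubectl rollout restart deployment/my-image-backend-local-deployment

Tools like Keel (keel.sh) can watch the registry and trigger exactly this kind of update automatically.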

kubernetes – kubectl get pod -o yaml: how to get output like the original source file rather than the full object

I have created a simple static web pod in the following way:

mkdir /etc/kubelet.d/
cat <<EOF >/etc/kubelet.d/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
EOF

After creating it, I deleted the source file /etc/kubelet.d/static-web.yaml. Now I want to retrieve the YAML for the running pod.
When I try, I get around 120 to 150 lines of YAML, whereas the original source file is only 10 to 15 lines.
Can you help me get output matching the source file above, eliminating the unnecessary object fields?

root@master-1:~# kubectl get pod static-web
NAME         READY   STATUS    RESTARTS   AGE
static-web   1/1     Running   0          20h

I have copied only a few lines:
root@master-1:~# kubectl get pod static-web -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-12-28T10:08:26Z"
  labels:
    role: myrole
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:role: {}
      f:spec:
        f:containers:
          k:{"name":"web"}:
            .: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
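
You won’t get the original 10-to-15-line file back byte for byte, but most of the noise is the managedFields block plus status and defaulted fields. Two common ways to trim it (the second uses kubectl-neat, a third-party krew plugin):

# kubectl v1.21+ can suppress managedFields explicitly
kubectl get pod static-web -o yaml --show-managed-fields=false
# kubectl-neat also strips status and defaulted fields
kubectl krew install neat
kubectl get pod static-web -o yaml | kubectl neat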

kubernetes – podman failed to parse image

I am running podman 2.2.1 on Ubuntu 20.04 and I am trying to run a kube YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  labels: 
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
      - ports:
        - containerPort: 80

I then run podman play kube ./podman-test.yaml, which returns:

Error: error encountered while bringing up pod nginx-test-pod-0: Failed to parse image "": invalid reference format

I have tried changing the image to no avail.
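
The stray dash before ports: looks like the culprit: it starts a second, anonymous entry in the containers list with no image, which matches the Failed to parse image "" error. Removing it nests the ports under the nginx container:

      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80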