rest – RESTful URLs for multiple resources in the same Microservice

No, it's a pain in the arse to parse the IDs out of the string afterwards, which you often want to do for logging or routing.

Try to keep the depth and the syntax of your URLs consistent.


Now I have a pattern which will match everything:

{type}/{operation}/{typeId}/{extra stuff}

I won't have to write a regex or code which goes:

if (firstPathNode == "group" and secondPathNode not in ["add", "remove", "invites"])
    typeId = secondPathNode

and I’ll never have to write

if (userId == "add")
    throw new Exception("you can't use that ID because it conflicts with a route")
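To make that concrete — a hypothetical sketch, not code from the original post — once every URL follows the same {type}/{operation}/{typeId}/{extra} shape, one generic parser covers every route:

```javascript
// Generic parser for URLs shaped as {type}/{operation}/{typeId}/{extra...}.
// Because every route shares this shape, there are no per-route special cases
// and no risk of an id like "add" colliding with a route segment.
function parsePath(path) {
  const [type, operation, typeId, ...extra] = path
    .replace(/^\/+/, '') // tolerate a leading slash
    .split('/');
  return { type, operation, typeId, extra };
}

console.log(parsePath('/group/add/1234'));
// { type: 'group', operation: 'add', typeId: '1234', extra: [] }
```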

How to know if Google Sheets IMPORTDATA, IMPORTFEED, IMPORTHTML or IMPORTXML functions are able to get data from a resource hosted on a website?

If the content is added dynamically (by using JavaScript), it can’t be imported by using Google Sheets built-in functions. Also, if the website’s webmaster has taken certain measures, these functions will not be able to import the data.

To check whether the content is added dynamically, using Chrome:

  1. Open the URL of the source data.
  2. Press F12 to open Chrome Developer Tools
  3. Press Control+Shift+P to open the Command Menu.
  4. Start typing javascript, select Disable JavaScript, and then press Enter to run the command. JavaScript is now disabled.

JavaScript will remain disabled in this tab so long as you have DevTools open.

Reload the page to see if the content that you want to import is shown. If it is, it can be imported by using Google Sheets built-in functions; otherwise it’s not possible with them, but might be possible by using other means of web scraping.

According to Wikipedia,

Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites.

The webmasters could use a robots.txt file to block access to the website. In that case the result will be #N/A Could not fetch url.

The webpage could also be designed to return a custom message instead of the data.

IMPORTDATA, IMPORTFEED, IMPORTHTML and IMPORTXML are able to get content from resources hosted on websites where:

  • The resource is publicly available. This means that it doesn’t require authorization / being logged in to any service to access it.
  • The content is “static”. This means that if you open the resource using the view source code option of modern web browsers, it is displayed as plain text.
    • NOTE: Chrome’s Inspect tool shows the parsed DOM; in other words, the actual structure/content of the web page, which could be dynamically modified by JavaScript code or browser extensions/plugins.
  • The content has the appropriate structure.
    • IMPORTDATA works with structured content such as CSV or TSV, regardless of the file extension of the resource.
    • IMPORTFEED works with marked-up content such as ATOM/RSS.
    • IMPORTHTML works with marked-up content such as HTML that includes properly marked-up lists or tables.
    • IMPORTXML works with marked-up content such as XML or any of its variants, like XHTML.
  • Google servers are not blocked by means of robots.txt or the user agent.
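To illustrate the robots.txt point in the last bullet — a simplified sketch; real robots.txt processing also honors Allow rules, wildcards, and per-user-agent groups:

```javascript
// Simplified robots.txt check: does any "Disallow:" prefix match the path?
// Real parsers also honor Allow rules, wildcards and per-user-agent groups;
// this sketch applies every Disallow line regardless of user agent.
function isDisallowed(robotsTxt, path) {
  return robotsTxt
    .split('\n')
    .map(line => line.trim())
    .filter(line => line.toLowerCase().startsWith('disallow:'))
    .map(line => line.slice('disallow:'.length).trim())
    .some(prefix => prefix !== '' && path.startsWith(prefix));
}

const robots = 'User-agent: *\nDisallow: /private/';
console.log(isDisallowed(robots, '/private/report.csv')); // true
console.log(isDisallowed(robots, '/public/report.csv'));  // false
```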

The W3C Markup Validator offers several tools to check whether a resource has been properly marked up.

Regarding CSV, check out Are there known services to validate CSV files?

It’s worth noting that the spreadsheet

  • should have enough room for the imported content; Google Sheets has a 5 million cell limit per spreadsheet, according to this post a limit of 18,278 columns, and a 50,000 character limit on cell content, whether a value or a formula.
  • doesn’t handle large in-cell content well; the “limit” depends on the user’s screen size and resolution, as it’s now possible to zoom in/out.



The following question is about a different result, #N/A Could not fetch url

availability groups – The Windows Server Failover Clustering (WSFC) resource control API returned error code 19

Here are the details of that error code (19):


19 (0x13) – ERROR_WRITE_PROTECT

The media is write protected.

You need to consult the cluster log (use Get-ClusterLog to get that) for additional details about what writes failed within the cluster operation being performed. Check that out and update your question with any errors you see.

That being said, combined with this symptom:

…if you reboot second node, database wont up.

You might be experiencing disk problems. Check the Windows system event log and SQL Server error log for messages related to failed writes or corruption.

oauth2 – OIDC Should I authenticate as the resource owner or machine when accessing an centralised authorization service?

We have an existing user authorization service based on casbin (which implements RBAC and holds fine-grained user permissions). We are looking to expose this service as a webservice for other microservices in our organization to consume.

At the same time, we are also looking to upgrade our systems to use OIDC. The users will send HTTP requests with access tokens to the microservice APIs, which will validate the tokens with an authorization server.

Provided the user is authorized to access the API, we will need to check the fine-grained permissions. Should our microservices call the fine-grained permission webservice using the access token the user provided, or should they have their own set of client credentials for calling the authorization service?

google cloud platform – Terraform on gcloud: serviceaccounts is forbidden: User “system:anonymous” cannot create resource “serviceaccounts”

I am trying to write terraform code for bootstrapping a GKE cluster (with RBAC) on Google Cloud.
The GKE cluster is created successfully, but I also want to create a service account which I can reuse for my later kubernetes provider configuration.
This means that I need to use the kubernetes provider in my submodule to temporarily create the kubernetes_service_account needed for the rest of my terraform code.

resource "google_container_cluster" "k8s_autopilot_cluster" { ... }

provider "kubernetes" {
    alias                   = "k8s_gcloud_temp"
    cluster_ca_certificate  = base64decode(google_container_cluster.k8s_autopilot_cluster.master_auth.0.cluster_ca_certificate)
    host                    = google_container_cluster.k8s_autopilot_cluster.endpoint
    client_certificate      = base64decode(google_container_cluster.k8s_autopilot_cluster.master_auth.0.client_certificate)
    client_key              = base64decode(google_container_cluster.k8s_autopilot_cluster.master_auth.0.client_key)
}

resource "kubernetes_service_account" "terraform_k8s_sa" {
    provider = kubernetes.k8s_gcloud_temp
    metadata {
        namespace = "kube-system"
        name      = "terraform-k8s-sa"
    }

    automount_service_account_token = false
}

So my cluster is created successfully, but the creation of my kubernetes_service_account always fails with Error: serviceaccounts is forbidden: User "system:anonymous" cannot create resource "serviceaccounts" in API group "" in the namespace "kube-system".

Any idea why I cannot use master_auth and what I should use instead?

resource limiting – kernel does not support Block I/O weight error, Docker

I need to set a block I/O limit on a docker container.

I decided to try out this docker feature, but I am getting the following:

(base) me@ubuntu:~/dev/ws3$ docker run -it --name cont_B --blkio-weight 300 busybox
WARNING: Your kernel does not support Block I/O weight or the cgroup is not mounted. Weight discarded.
/ # exit

What does this mean? It’s only a warning, but it says it’s not going to do what I want. How can I remediate this?

ruby – Rails: What route to let show one resource but submit to another?

I’m sketching out a data model for a Rails app that lets users submit answers to technical questions. For example, the prompt might be:

Write a SQL query to determine the number of unique visitors last week

My models would look like this:

class User < ApplicationRecord
  has_many :submissions
end

class Submission < ApplicationRecord
  belongs_to :user
  belongs_to :question
end

class Question < ApplicationRecord
  has_many :submissions
end

I’d like a user to be able to visit a page that will display:

  1. The question’s prompt
  2. The user’s previous submissions for this question
  3. A form that lets the user create a new submission for this question

How should I set up my routes given these requirements? I’m pretty lost but am thinking I could use nested resources in my routes.rb:

resources :questions do
  resources :users do
    resources :submissions
  end
end

If I did that, a specific user submitting an answer to a specific question could be:

POST to /questions/:question_id/users/:user_id/submissions

Then my controller’s create action could look like this:

class SubmissionController < ApplicationController
  def create
    @question = Question.find(params[:question_id])
    @user = User.find(params[:user_id])
    Submission.create(params[:submission].merge(user: current_user, question: @question))
    @submissions = Submission.where(user: @user, question: @question)
    redirect_to [@question, @user]
  end
end

Is there a more RESTful / Rails-ey way to go about the above? Thanks in advance for any help you can offer!

Resource Filter by AddonsLab | NulledTeam UnderGround

If the filter URL is modified and the values of multiple-selection fields are changed intentionally (e.g. with an attempt to cause an error in the website or execute SQL injection), there would be a PHP error logged in the admin panel. However, this would not cause any visible errors or unexpected SQL queries. With this release, we have made sure these invalid values don’t make their way to built-in XenForo functions that expect only valid values.

This version also fixes a compatibility…


node.js – google compute engine instance creation error. Resource is not ready

Sometimes when creating a Google Compute Engine instance we get an error like:

GaxiosError: The resource 'projects/zoocorder/zones/us-central1-c/disks/job1617338480855vm' is not ready
    at Gaxios._request (/workspace/node_modules/gaxios/build/src/gaxios.js:86:23)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at async Compute.requestAsync (/workspace/node_modules/google-auth-library/build/src/auth/
) { 

I’m not sure where the error is because it’s cut off by the logs viewer. I’m requiring Google APIs and using Compute Engine to create a disk and an instance with a custom Docker image.

const {google} = require('googleapis');
const compute = google.compute('v1');

compute.disks.insert(request, function(err, response) {

  var instanceCreateRequest = {
    // Project ID for this request.
    project: 'zoocorder',
    // The name of the zone for this request.
    zone: 'us-central1-c',

    resource: {
      "canIpForward": false,
      "deletionProtection": false,
      "description": "",
      "disks": [
        {
          "autoDelete": true,
          "boot": true,
          "deviceName": instanceId,
          "guestOsFeatures": [
            { "type": "UEFI_COMPATIBLE" },
            { "type": "SEV_CAPABLE" },
            { "type": "VIRTIO_SCSI_MULTIQUEUE" }
          ],
          "interface": "SCSI",
          "mode": "READ_WRITE",
          "source": "projects/(project)/zones/us-central1-c/disks/" + vmid,
          "type": "PERSISTENT"
        }
      ],
      "displayDevice": {
        "enableDisplay": false
      },
      "labels": {
        "container-vm": "cos-stable-81-12871-1174-0"
      },
      "machineType": "projects/(project)/zones/us-central1-c/machineTypes/n1-standard-4",
      "metadata": {
        "items": [
          {
            "key": "gce-container-declaration",
            "value": "spec:\n  containers:\n    - name: " + instanceId + "\n      image: '(image)'\n      stdin: false\n      tty: false\n  restartPolicy: Always\n\n# This container declaration format is not public API and may change without notice. Please\n# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine."
          },
          {
            "key": "google-logging-enabled",
            "value": "true"
          }
        ]
      },
      "name": instanceId,
      "networkInterfaces": [
        {
          "accessConfigs": [
            {
              "name": "External NAT",
              "networkTier": "PREMIUM",
              "type": "ONE_TO_ONE_NAT"
            }
          ],
          "network": "projects/zoocorder/global/networks/default",
          "subnetwork": "projects/zoocorder/regions/us-central1/subnetworks/default"
        }
      ],
      "reservationAffinity": {
        "consumeReservationType": "ANY_RESERVATION"
      },
      "scheduling": {
        "automaticRestart": true,
        "onHostMaintenance": "MIGRATE",
        "preemptible": false
      },
      "shieldedInstanceConfig": {
        "enableIntegrityMonitoring": true,
        "enableSecureBoot": false,
        "enableVtpm": true
      },
      "serviceAccounts": [
        {
          "email": "(email)",
          "scopes": [ /* ... */ ]
        }
      ],
      "tags": {
        "items": [ /* ... */ ]
      }
    },

    auth: authClient,
  };

  compute.instances.insert(instanceCreateRequest, function(err, response) {
    if (err) {
      console.log("err", err);
    }

    listInstances(instanceId, authClient, mdata);

    console.log(JSON.stringify(response, null, 2));
  });
});

I checked the Disks list and the disk was created, but sometimes it didn’t get correctly assigned to the instance. Is there a way to wait until the disk is ready before creating the instance? Also, I don’t see any mention of the error in err or response, so I don’t know when to retry.
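One way to approach the waiting question — a sketch, not code from the original post — is to poll the disk's status field (a GCE disk resource reports "READY" once it is usable) before calling compute.instances.insert. Here getDisk stands in for a promisified compute.disks.get call:

```javascript
// Poll a disk until it reports status "READY" before creating the instance.
// getDisk stands in for a promisified compute.disks.get call; a GCE disk
// resource exposes a status field that moves from "CREATING" to "READY".
async function waitForDiskReady(getDisk, retries = 30, delayMs = 2000) {
  for (let i = 0; i < retries; i++) {
    const disk = await getDisk();
    if (disk && disk.status === 'READY') {
      return disk;
    }
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error('disk did not become READY in time');
}
```

The idea would be to call compute.disks.insert, wait with a helper like this (wrapping compute.disks.get for the new disk's name), and only then call compute.instances.insert.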

linux – Applying systemd control group resource limits automatically to specific user applications in a gnome-shell session

Having seen that GNOME now launches apps under systemd scopes I’ve been looking at a way to get systemd to apply some cgroup resource and memory limits to my browser.

I want to apply a MemoryMax and CPUShare to all app-gnome-firefox-*.scope instances per systemd.resource-control.

But GNOME isn’t launching firefox with the instantiated unit format app-gnome-firefox-@.scope so I don’t know how to make a systemd unit file that will apply automatically to all app-gnome-firefox-*.scope instances.

I can manually apply the resource limits to an instance with systemctl set-property --user app-gnome-firefox-92450.scope (for example) once the unit starts, but that’s a pain.

Is there any way to inject properties for transient scopes with pattern matching for names?

This isn’t really gnome-shell specific; it applies just as well to a user terminal session that invokes a command with systemd-run --user --scope.


Firefox is definitely launched under a systemd scope, and it gets its own cgroup:

$ systemctl --user status app-gnome-firefox-92450.scope
● app-gnome-firefox-92450.scope - Application launched by gnome-shell
     Loaded: loaded (/run/user/1000/systemd/transient/app-gnome-firefox-92450.scope; transient)
  Transient: yes
     Active: active (running) since Wed 2021-03-31 09:44:30 AWST; 32min ago
      Tasks: 567 (limit: 38071)
     Memory: 2.1G
        CPU: 5min 39.138s
     CGroup: /user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-92450.scope
             ├─92450 /usr/lib64/firefox/firefox

Verified by

$ systemd-cgls --user-unit app-gnome-firefox-92450.scope
Unit app-gnome-firefox-92450.scope (/user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-92450.scope):
├─92450 /usr/lib64/firefox/firefox


$ ls -d /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-*

I can apply a MemoryMax (cgroup v2 constraint memory.max) to an already-running instance with systemctl set-property and it takes effect:

$ systemctl set-property --user app-gnome-firefox-98883.scope MemoryMax=5G
$ systemctl show --user app-gnome-firefox-98883.scope |grep ^MemoryMax
$ cat /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-*/memory.max

It definitely takes effect – setting a low MemoryMax like 100M causes the firefox scope to OOM, as seen in journalctl --user -u app-gnome-firefox-98883.scope.

The trouble is that I can’t work out how to apply systemd.resource-control rules automatically for new instances of the app automatically.

I’ve tried creating a .config/systemd/user/app-gnome-firefox-@.scope containing

MemoryMax = 5G

but it appears to have no effect.

systemd-analyze verify chokes on it rather unhelpfully:

$ systemd-analyze  verify --user .config/systemd/user/app-gnome-firefox-@.scope 
Failed to load unit file /home/craig/.config/systemd/user/app-gnome-firefox-@i.scope: Invalid argument

If I use systemctl set-property --user app-gnome-firefox-92450.scope on a running instance and systemctl --user show app-gnome-firefox-92450.scope I see the drop-in files at:


It has Names containing the pid, so that can’t be matched easily:


and I’m kind of stumped. Advice would be greatly appreciated, hopefully not “gnome-shell is doing it wrong, patch it” advice. Some draft systemd docs suggest it’s using one of the accepted patterns.


The only workaround I see so far is to launch the firefox instance with systemd-run myself:

systemd-run --user --scope -u firefox.scope -p 'MemoryMax=5G' -p 'CPUQuota=80%' /usr/lib64/firefox/firefox

and let that be the control process. But it looks like this isolates the firefox control channel in some manner that prevents firefox processes launched by other apps or the desktop session from then talking to the cgroup-scoped firefox, resulting in

Firefox is already running, but is not responding. To use Firefox, you must first close the existing Firefox process, restart your device, or use a different profile.

Edit: firefox remoting when launched manually via systemd-run is fixed by setting MOZ_DBUS_REMOTE in the environment both for my user session and as a -E MOZ_DBUS_REMOTE=1 option to systemd-run. It’s probably because I’m using Wayland.

Still a clumsy workaround – surely it should be possible to apply resource control rules to slices via .config/systemd/user?
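For the slice-based approach the last question hints at, a sketch (assuming a recent systemd where gnome-shell places app scopes under the user's app.slice — this depends on the systemd and GNOME versions in use): a drop-in such as ~/.config/systemd/user/app.slice.d/50-limits.conf would constrain everything in that slice, not just Firefox:

```
[Slice]
MemoryMax=8G
```

After systemctl --user daemon-reload, systemctl --user show app.slice should reflect the new MemoryMax value.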