Cache configuration for a PostgreSQL/TimescaleDB VM on ZFS with Proxmox

I have a single-node Proxmox cluster and I want to start a new VM running PostgreSQL and TimescaleDB. After a lot of reading about how to tune ZFS volumes for this purpose, I still have some doubts about the cache options. There are three caches, ordered from furthest to nearest to the database: the Proxmox one (ZFS ARC), the Linux VM one (the kernel page cache, roughly LRU), and the PostgreSQL one (shared buffers, clock sweep).

I have read a lot of information, some of it contradictory, so I don’t know if this is true, but it seems the PostgreSQL cache isn’t designed the same way as a kernel cache, which tries to cache everything and evicts only when there isn’t enough room to keep caching. It seems to be more of a buffer for the data being processed at the moment than a long-term cache; indeed, it’s called shared_buffers. I guess that’s why the documentation doesn’t recommend setting shared_buffers to a high percentage of the available RAM the way ARC does, but rather somewhere between 25% and 50%. It seems the real PostgreSQL cache is the kernel page cache, not shared_buffers.

Taking this into account, there are a few possible configurations to consider:

  1. Create a VM with a moderate amount of RAM (let’s say 12 GB) and set shared_buffers to 10GB. The goals: 1) have a good amount of memory to act as a buffer for the ongoing queries, and 2) starve the VM of RAM so that its own cache, which with its LRU policy should be the worst of the three, is barely used, relying instead on ARC with its better weighting. The problem with this configuration may come from the cache being outside the VM, which could reduce performance instead of improving it. I’m also not sure how much room I have to leave above the shared_buffers size for the VM OS and the other database processes.
  2. Create a VM with a high amount of RAM (let’s say 48 GB), keep shared_buffers at the same 10GB, and also set the ZFS primarycache property to metadata (see the sketch after this list). This way the cache is nearer the database and inside the VM, but with worse eviction logic; LRU seems to be a rather poor fit for database workloads.
  3. Create a VM with a high amount of RAM and keep primarycache=all. I think this would be a bad idea because: 1) the VM and Proxmox caches would compete for resources, and 2) data would be cached twice.
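
For reference, option 2 would boil down to something like this (just a sketch; the zvol name is only an example, adjust it to the actual VM disk):

    # On the Proxmox host, for the zvol backing the VM's data disk:
    zfs set primarycache=metadata rpool/data/vm-100-disk-0

    # Inside the VM, in postgresql.conf:
    shared_buffers = 10GB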

To give some context, the node has 64 GB of RAM in total, and PostgreSQL/TimescaleDB will be the most demanding, highest-priority application running on it.

So, are my initial assumptions correct? Which configuration would work best? What would you change?

Best regards, thanks for your time,

Héctor

PS: Sorry, I posted this on Super User before, but I think it fits better here.

version control – How do I spawn additional Drupal instances with the same configuration?

I have the following scenario:

I will develop an application based on Drupal.
Everything will be developed on a local instance.
Composer files, configuration and so on will be kept in a git repository.
The repository will be hosted on GitLab.
I will use the CI/CD features over there.
I will need one test instance.

The project starts with one production instance.
In the future, there will be additional production instances.
They all should have exactly the same configuration, but different content.

I already did something like this, where I copied the initial database from my local install to my test and production server.
Afterwards, my CI/CD scripts could load the configuration from git to the different instances.
But there, I only had one production server.

If I remember correctly, configuration is bound to an ID of the initial database.
So, I can’t just set up a new instance and load the configuration from the old one.
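
For reference, the deployment step in my CI/CD is essentially this (a sketch assuming a drush-based deployment; the exact commands in my scripts may differ slightly):

    # Runs on each instance after the code and config have been pulled from git
    drush config:import -y
    drush updatedb -y
    drush cache:rebuild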

Since they will all have exactly the same configuration, I would love to have a develop branch that delivers to the test server and a master branch that delivers to all production servers. If there are better solutions, I’m open to those as well.

What would be the best workflow to add additional instances in the future that use the same config and can receive updates from the same CI/CD?

Would this workflow become completely unworkable if, somewhere in the future, one instance started needing additional configuration of its own?

KVM virtualized firewall NIC configuration + iptables

I’m currently considering setting up a virtualized firewall on top of KVM. The firewall could be pfSense, OPNsense, IPFire, basically whatever; I haven’t decided yet.

What I’m currently thinking is this: I have a physical machine with 2 physical NICs.
So the idea is to create one bridge per NIC, let’s call them:

  • br0 on eth0 (WAN)
  • br1 on eth1 (LAN)

The firewall VM will have one NIC assigned to br0 and one to br1.

Now I’m struggling a bit with what the best configuration would be.
I’ve seen examples where people set up networking in such a way that the WAN interface of the KVM host does not get an IP and the host can only reach the internet by going through the virtualized firewall, i.e. the host is configured as if it were just another system on the LAN side.

I’m wondering if this is really the way to go. The risk I see is that if the firewall VM goes down, you can’t even reach the internet from the KVM host anymore, and I don’t know if I want to run that risk.

On the other hand, the other option I see is to allow the KVM host to connect to the internet directly as well.
The only reason it would use this connection is probably to get updates for the OS (basically your apt update).
So it would use its own DNS config (e.g. the Google DNS servers), not the DNS of the virtual firewall.

So my first question: which of these two options would be best?

If it’s the first option, how would you do it?

  • Basically set the eth0 interface to manual mode so it doesn’t get an IP (roughly as in the sketch after this list)?
  • Would I then also still need to allow IP forwarding on this host?
  • Would any iptables rules in the FORWARD chain be necessary?
  • Would you expect any specific iptables rules for br0, e.g. in the INPUT chain? Since br0 will only be used by the WAN interface of the firewall VM, I guess you don’t need any br0 rules at the host level?
  • Would you still expect any iptables INPUT rules for eth0?
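
To make the first option concrete, I imagine something like this (a sketch assuming a Debian-style /etc/network/interfaces with bridge-utils; addresses and names are just examples):

    # WAN bridge: no IP on the host, eth0 is only a bridge port
    auto br0
    iface br0 inet manual
        bridge_ports eth0
        bridge_stp off

    # LAN bridge: the host lives on the LAN side behind the firewall VM
    auto br1
    iface br1 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1   # the firewall VM's LAN address
        bridge_ports eth1
        bridge_stp off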

If the best option is option 2, allowing the KVM host direct access to the internet as well, how would that best be done?

  • I guess configuring eth0 for DHCP?
  • Would I then also still need to allow IP forwarding on this host?
  • Would any iptables rules in the FORWARD chain be necessary?
  • What kind of iptables rules would you expect on eth0/br0? I would think "block all" for eth0 in the INPUT chain. What about the br0 interface? No rules in the INPUT chain, since the WAN interface of the firewall VM is attached to br0, so that firewall handles all input restrictions?

I appreciate any feedback you can give.

9 – Getting an error while exporting configuration

I am trying to export config using the Drupal Console command:

drupal ce

Error

Call to undefined function Drupal\Console\Command\Config\config_get_config_directory() in Drupal\Console\Command\Config\ExportCommand->interact() (line 86 of vendor/drupal/console/src/Command/Config/ExportCommand.php)

I have defined the config directory in settings.php; it is config/sync.
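
The relevant line in settings.php looks roughly like this (from memory; the exact setting and path may differ on my install):

    $settings['config_sync_directory'] = 'config/sync';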

I installed Drupal 9.0.9 using Composer.

boot – How do I get a Hyper-V Windows VM to run installation and configuration scripts no later than the first time a user logs in?

I want to set up a Windows 10 VM on my workstation from a VHD available from Microsoft.
I’ve got a PowerShell script that will download the VHD and create the VM in Hyper-V. Check.

Before I use the VM, I want to make sure my favorite development applications are installed and that some required network and security configurations and applications are in place. The latter, at least, require certain batch and PowerShell scripts to be run.

I can mount the VHD and copy the scripts to the VHD for the VM from PowerShell. Check.

When I launch the VM, I can run the scripts manually inside it, and they install and configure things when run with the right administrative privileges. That’s OK, but not ideal. I want this to be fully automatic: when I log into the VM for the first time, the applications and configurations should either already be in place or launch automatically in the background so that everything is in place soon after.

Examples of the customizations to the base image:

  • Install the latest Chrome (install without user input)
  • Install a particular VPN we use, which requires a bunch of PowerShell
    configuration changes in administrative mode before the installer
    will work correctly
  • Install WinMerge

I have not worked deeply with automating the configuration of Windows workstations in ages, nor with Hyper-V VMs, so I’m not even sure whether the approach is to put scripts in a magic startup folder, to set registry keys in the VM, or some other VM setup trick that is unknown to me.
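
For example, the registry-key idea I vaguely have in mind is something like this (just a sketch; the script path is made up, and I don’t know whether this is the recommended mechanism):

    # Inside the VM (or against the offline SOFTWARE hive loaded from the mounted VHD):
    # register a one-time bootstrap script under the RunOnce key.
    $runOnce = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce'
    Set-ItemProperty -Path $runOnce -Name 'Bootstrap' `
        -Value 'powershell.exe -ExecutionPolicy Bypass -File C:\Setup\bootstrap.ps1'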

Any ideas how to pull this off with PowerShell and maybe a little batch?

architecture – What are some of the best tools to write, manage, and validate software configuration?

Let’s say you have a project that relies on user-defined configs which have a lot of parameters and sub-parameters. What’s the best tool to manage this kind of complexity so that the config schemas are:

  • easy for the developer to define
  • easy for the end user to change
  • free of duplicated parameters (duplication happens easily when there are too many configs)
  • easy to validate when changes are made

I’m aware of JSON Schema, and it does a pretty decent job. I was reading about Jsonnet, and it sounds like it would be better than JSON Schema. I have also looked a bit at marshmallow and pydantic, but I’m not sure these are the tools I’m looking for.
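
To give a sense of what I’m dealing with, the nested schemas look roughly like this when expressed with pydantic (a sketch; the parameter names are made up):

    from pydantic import BaseModel, Field

    class DatabaseConfig(BaseModel):
        host: str = "localhost"
        port: int = Field(5432, ge=1, le=65535)  # validated range

    class AppConfig(BaseModel):
        debug: bool = False
        database: DatabaseConfig = DatabaseConfig()

    # Construction validates; bad values raise a ValidationError.
    cfg = AppConfig(debug=True, database={"host": "db.internal", "port": 5432})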

Is there anything out there, the latest, greatest, and most intelligent, that I’m unaware of? The project is in Python, if that makes any difference.

PS: Just to clarify, I’m not looking for configuration management tools like Chef, Puppet, or Salt.

ajax – OpenModalDialogCommand without site configuration permission

I have a custom module in Drupal 8.9.9 to open a form in a modal dialog with OpenModalDialogCommand.

The custom_modal.routing.yml file has:

custom_modal.open_modal_form:
  path: '/modal_form/{title}/{vocabulary}'
  defaults:
    _title: 'Modal Form'
    _controller: 'Drupal\custom_modal\Controller\CustomModalController::openModalForm'
  requirements:
    _access: 'TRUE'
  options:
    parameters:
      title:
        type: String
      vocabulary:
        type: String

And the CustomModalController.php:

  public function openModalForm($title, $vocabulary) {
    $response = new AjaxResponse();

    $values = array('vid' => $vocabulary);

    $term = \Drupal::entityTypeManager()
      ->getStorage('taxonomy_term')
      ->create($values);

    $form = \Drupal::entityTypeManager()
      ->getFormObject('taxonomy_term', 'default')
      ->setEntity($term);

    $term_form = \Drupal::formBuilder()->getForm($form);

    $response->addCommand(new OpenModalDialogCommand($title, $term_form, ['width' => '800']));

    return $response;
  }

My theme has the following dependencies:

global-scripts:
  dependencies:
    - core/jquery
    - core/drupal.ajax
    - core/drupal
    - core/drupalSettings
    - core/jquery.once
    - core/drupal.dialog.ajax

And I have even added the following to my custom_modal.module:

function custom_modal_form_alter(&$form, &$form_state, $form_id, $no_js_use = FALSE) {
  $form['#attached']['library'][] = 'core/drupal.dialog.ajax';
  $form['#attached']['library'][] = 'core/drupal.ajax';
}

Nevertheless, while everything works fine when I’m logged in as the site administrator, with a plain authenticated user the modal dialog does not open and I get the following error:

There was an HTTP AJAX error.
HTTP Result Code: 403
Below is the information on debugging.
Path: /modal_form/Create/style
StatusText: Forbidden
ResponseText: {"message": "The 'administer site configuration' permission is required."}

Any ideas? Thanks in advance.

complexity theory – Configuration of a space-bounded Turing machine

A configuration of a Turing machine is defined as follows:

an ordered triple (x, q, k) ∈ Σ∗ × K × N, where x denotes the string
on the tape, q denotes the machine’s current state, and k denotes the
position of the machine on the tape

I have read in a paper that a space-bounded non-deterministic Turing machine (NSPACE) has at most 2^(d*n) configurations on an input of length n, where d is a constant. How do we know this is true? What is d? And how can we prove it?
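
My rough attempt at counting, in case it helps frame the question (I’m assuming the space bound is linear, s(n) <= c*n, and I’m not sure this is right): a configuration is determined by the current state, the head position, and the tape contents, so the number of configurations should be at most

|K| * s(n) * |Σ|^s(n) = 2^(log2 |K| + log2 s(n) + s(n)*log2 |Σ|) <= 2^(d*n), for some constant d that depends only on the machine.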

system.xml – How do I create a dropdown as a configuration section whose options can be updated from the backend and shown on the product details page?

I want to add a select box to the product details page.
It has several options, but it is not a product attribute; it should be configured as a global setting.
The admin should also be able to add new options from the backend.

I tried the system.xml approach, but I couldn’t update the option values for the select box I created.
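
What I tried looks roughly like this (a sketch from memory, assuming Magento 2; the section, group, field, and source model names are placeholders):

    <?xml version="1.0"?>
    <!-- etc/adminhtml/system.xml -->
    <config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:noNamespaceSchemaLocation="urn:magento:module:Magento_Config:etc/system_file.xsd">
        <system>
            <section id="product_dropdown" translate="label" sortOrder="100" showInDefault="1">
                <label>Product Page Dropdown</label>
                <tab>catalog</tab>
                <group id="general" translate="label" sortOrder="10" showInDefault="1">
                    <label>General</label>
                    <field id="options" translate="label" type="select" sortOrder="10" showInDefault="1">
                        <label>Dropdown Option</label>
                        <source_model>Vendor\Module\Model\Config\Source\DropdownOptions</source_model>
                    </field>
                </group>
            </section>
        </system>
    </config>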

Can someone help me do this, please?