oracle – What could be the cause of invalidation of “sys” views?

I had never experienced anything like this, but one of the scripts containing

select count(*) from user_tables where table_name = 'XXX';

started to throw

PL/SQL: ORA-04063: view "SYS.USER_TABLES" has errors

I used a visual tool to look into it and saw that many views are invalidated. What is the possible cause of this? I've never seen SYS views invalid before.

What I did was run this to generate ALTER scripts:

select 'alter view SYS.' || object_name || ' compile;' from user_objects where object_type = 'VIEW' and  status = 'INVALID';

This generated 707 rows. I ran them; 202 views remained problematic. Ran them again; still 202.
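Worth noting: Oracle also ships a bulk recompile that could replace the hand-generated ALTER statements (a sketch, assuming SYSDBA access):

-- Recompile all invalid objects in dependency order; ships with the database.
-- The script equivalent is @?/rdbms/admin/utlrp.sql
EXEC UTL_RECOMP.RECOMP_SERIAL();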

And here is a sample error:

SQL Error: ORA-00604: error occurred at recursive SQL level 1
ORA-04063: view "SYS.DBA_OBJECT_TABLES" has errors
00604. 00000 -  "error occurred at recursive SQL level %s"
*Cause:    An error occurred while processing a recursive SQL statement
           (a statement applying to internal dictionary tables).
*Action:   If the situation described in the next error on the stack
           can be corrected, do so; otherwise contact Oracle Support.

Additional Info

I started to look into the USER_TABLES view definition, and I tested the tables in its FROM clause.

Finding – select * from sys.deferred_stg$ fails; the table does not exist.
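To double-check that the table is really gone from the dictionary, and not just invisible to the current user, something like this should confirm it (a sketch; assumes SELECT access on DBA_OBJECTS):

-- If this returns no rows, the dictionary table itself is missing,
-- which would explain why every view referencing it is invalid.
select owner, object_name, object_type, status
  from dba_objects
 where object_name = 'DEFERRED_STG$';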

magento2 – Update special prices: Problems with cache invalidation

I have a strange problem/issue/bug/feature… whatever…

Case:
Every day, at defined times, a module imports a few special prices for some products.
So the cache and the index have to be rebuilt/flushed for each product that is updated.

How it is implemented so far:
To improve the performance of the import and reduce the load on the database, we use the Magento\Catalog\Model\Product\Action class to update the prices for the products:

/** @var \Magento\Catalog\Model\Product\Action $productAction */
$productAction->updateAttributes([$id], [
  'special_price' => 123.45,
  'special_from_date' => '(...)'
], 0);

Let's get a little deeper into Magento and the indexer:
There are a few triggers in the database which listen on the EAV tables.
So if there is a change on any of those tables (in this case catalog_product_entity_decimal), there will be an insert into the table catalog_product_price_cl, which triggers a reindex of the specific product.
That's pretty cool 🙂
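To illustrate the mechanism, the changelog can be inspected directly (a sketch; table and column names as in a stock Magento 2 schema):

-- Rows written by the triggers on catalog_product_entity_decimal;
-- the price indexer picks these entity_ids up on the next mview cron run.
SELECT version_id, entity_id
FROM catalog_product_price_cl
ORDER BY version_id DESC
LIMIT 10;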

BUT:
It seems that the parent product of a configurable product is not reindexed.
The customer (shop owner) also reported that this issue happens on a normal simple product (without a parent) as well.

So I added an additional trigger to reindex the product and its parents:

/** @var \Magento\ConfigurableProduct\Model\Product\Type\Configurable $configurableTypeInstance */
/** @var \Magento\GroupedProduct\Model\Product\Type\Grouped $groupedTypeInstance */
/** @var \Magento\Catalog\Model\Indexer\Product\Full $indexer */
$indexer->executeList(array_unique(array_merge(
   [$product->getId()],
   $configurableTypeInstance->getParentIdsByChild($product->getId()),
   $groupedTypeInstance->getParentIdsByChild($product->getId()),
   $product->getTypeInstance()->getParentIdsByChild($product->getId())
)));

But this also does not solve the problem that the prices are not displayed correctly in the frontend. After checking the values in the database, I noticed that the prices in the index tables are correct. (I am not sure if the extra reindex I added is still necessary.)

So I checked the installed Varnish cache (full-page cache). The Varnish cache is also valid. (Adding a cache-buster to the URL fetches a fresh page… with the outdated special price 🙁)

I also checked the block HTML and collection caches. Also valid. (I cleared them; the price is still outdated.)

Now I am out of ideas, which is why I am coming to you.
What am I doing wrong?

I look forward to your answers!

computer architecture – Memory Invalidation and Misses

Assume a machine with this particular architecture: we have 4 processors, and each processor has a private L1 cache; the L2 cache is shared. If we write to an address in one of the private L1 caches, we invalidate the blocks in the other private caches that contain the same address. Say P0 (processor 0) reads address 100, so the block containing it, call it B0, gets stored in P0's private cache as well as in the shared L2 cache. Now say P2 writes to location 100, so we need to invalidate block B0 in P0's private cache. If P0 now wants to read address 100 again, it will suffer a miss in its private cache, but will it get a HIT or a MISS in the L2 cache?

I think it will get a HIT. Can anyone confirm?
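To convince myself, I sketched the scenario as a toy write-invalidate model. The assumptions are mine: the shared L2 is inclusive and gets updated on a write; whether a real machine hits in L2 depends on the protocol (write-back vs. write-through, inclusive vs. exclusive caches).

// Toy write-invalidate model -- a sketch, not a real MESI implementation.
// Assumption: the shared L2 is inclusive and is updated on every write.
#include <array>
#include <cstdio>
#include <unordered_set>

struct System {
    static constexpr int kCores = 4;
    std::array<std::unordered_set<long>, kCores> l1; // per-core private L1
    std::unordered_set<long> l2;                     // shared, inclusive L2

    void read(int core, long addr) {
        std::printf("P%d read %ld: L1 %s, L2 %s\n", core, addr,
                    l1[core].count(addr) ? "HIT" : "MISS",
                    l2.count(addr) ? "HIT" : "MISS");
        l1[core].insert(addr); // allocate on miss
        l2.insert(addr);
    }

    void write(int core, long addr) {
        for (int c = 0; c < kCores; ++c)
            if (c != core) l1[c].erase(addr); // invalidate other L1 copies
        l1[core].insert(addr);
        l2.insert(addr);                      // L2 keeps/receives the line
    }
};

int main() {
    System s;
    s.read(0, 100);   // P0 caches B0 in its L1 and in the shared L2
    s.write(2, 100);  // P2's write invalidates B0 in P0's L1 only
    s.read(0, 100);   // prints: L1 MISS, L2 HIT (under these assumptions)
}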

caching – How to get cache tags invalidation working with Varnish Purger

On my server, I've got Varnish 5.2 working; so far so good. However, any changes made on the site are not shown to anonymous visitors until the cache expires.

I installed the Purge module together with the Varnish Purge module to get cache tag invalidation going. Then I added a purger there, and another, and another, but no matter which settings I try, the requests keep being served from the cache.

Obviously, I’m doing something wrong, but what?

The contents of /etc/varnish/usr.vcl:

vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

acl purge {
    "127.0.0.1";
}

# Respond to incoming requests.
sub vcl_recv {
    # Add an X-Forwarded-For header with the client IP address.
    if (req.restarts == 0) {
        if (req.http.X-Forwarded-For) {
            set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
        }
        else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
    # Only allow PURGE requests from IP addresses in the 'purge' ACL.
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            return (synth(405, "Not allowed."));
        }
        return (hash);
    }
    # Only allow BAN requests from IP addresses in the 'purge' ACL.
    if (req.method == "BAN") {
        # Same ACL check as above:
        if (!client.ip ~ purge) {
            return (synth(403, "Not allowed."));
        }
        # Logic for the ban, using the Cache-Tags header. For more info
        # see https://github.com/geerlingguy/drupal-vm/issues/397.
        if (req.http.Cache-Tags) {
            ban("obj.http.Cache-Tags ~ " + req.http.Cache-Tags);
        }
        else {
            return (synth(403, "Cache-Tags header missing."));
        }
        # Throw a synthetic page so the request won't go to the backend.
        return (synth(200, "Ban added."));
    }
    if (req.method == "URIBAN") {
        ban("req.http.host == " + req.http.host + " && req.url == " + req.url);
        # Throw a synthetic page so the request won't go to the backend.
        return (synth(200, "Ban added."));
    }
    # Only cache GET and HEAD requests (pass through POST requests).
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
    # Pass through any administrative or AJAX-related paths.
    if (req.url ~ "^/status\.php$" ||
        req.url ~ "^/update\.php$" ||
        req.url ~ "^/admin$" ||
        req.url ~ "^/admin/.*$" ||
        req.url ~ "^/flag/.*$" ||
        req.url ~ "^.*/ajax/.*$" ||
        req.url ~ "^.*/ahah/.*$") {
        return (pass);
    }

    # Removing cookies for static content so Varnish caches these files.
    if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
        unset req.http.Cookie;
    }

    if (req.http.Cookie) {
        set req.http.Cookie = ";" + req.http.Cookie;
        set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
        set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
        set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
        set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");

        if (req.http.Cookie == "") {
            unset req.http.Cookie;
        }
        else {
            return (pass);
        }
    }
}
# Set a header to track cache HITs and MISSes.
sub vcl_deliver {
    # Remove ban-lurker friendly custom headers when delivering to client.
    unset resp.http.X-Url;
    unset resp.http.X-Host;
    # Comment these for easier Drupal cache tag debugging in development.
    #unset resp.http.Cache-Tags;
    #unset resp.http.X-Drupal-Cache-Contexts;
    if (obj.hits > 0) {
        set resp.http.Cache-Tags = "HIT";
    }
    else {
        set resp.http.Cache-Tags = "MISS";
    }
}
# Instruct Varnish what to do in the case of certain backend responses (beresp).
sub vcl_backend_response {
    # Set ban-lurker friendly custom headers.
    set beresp.http.X-Url = bereq.url;
    set beresp.http.X-Host = bereq.http.host;
    # Cache 404s, 301s, and 500s with a short lifetime to protect the backend.
    if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
        set beresp.ttl = 10m;
    }
    # Don't allow static files to set cookies.
    # (?i) denotes case insensitive in PCRE (Perl-compatible regular expressions).
    # This list of extensions appears twice, once here and again in vcl_recv, so
    # make sure you edit both and keep them equal.
    if (bereq.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
        unset beresp.http.set-cookie;
    }
    # Allow items to remain in cache up to 6 hours past their cache expiration.
    set beresp.grace = 6h;
}

The contents of the nginx server configuration:

server {
    listen 443 ssl http2;
    server_name test.example.com;
    port_in_redirect off;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
      proxy_pass http://127.0.0.1:6081;
      proxy_set_header Host $http_host;
      proxy_set_header X-Forwarded-Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header HTTPS "on";
      proxy_set_header If-Modified-Since $http_if_modified_since;
      proxy_buffering on;
      proxy_buffer_size   128k;
      proxy_buffers   4 256k;
      proxy_busy_buffers_size   256k;
    }
}

server {
   listen 8080;
   server_name test.example.com;
   root /home/example/domains/test/public_html/web;
   index index.php index.html index.htm index.nginx-debian.html;
   port_in_redirect off;

   location / {
      try_files $uri $uri/ /index.php?$query_string;
   }

   location ~ \.php$ {
      include snippets/fastcgi-php.conf;
      fastcgi_pass 127.0.0.1:9000;
   }
}

server {
    listen 80;
    if ($host = test.example.com) {
        return 301 https://$host$request_uri;
    }
    server_name test.example.com;
    return 404;
}
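One way I can think of to rule out the VCL itself is to issue a BAN by hand and watch it arrive (a sketch; assumes Varnish listens on 127.0.0.1:6081 as in the proxy_pass above, and node:1 is just an example Drupal cache tag):

# Send a ban directly to Varnish, bypassing nginx, then watch for it.
curl -X BAN -H "Cache-Tags: node:1" http://127.0.0.1:6081/
varnishlog -g request -q 'ReqMethod eq "BAN"'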

object oriented – Using for_each instead of iterators to avoid iterator invalidation

I am writing a simple custom (special-purpose) container and would like to allow iteration over each element while avoiding iterators, due to the problem of iterator invalidation.

Instead of providing a pair of iterators (via begin() and end()), I was thinking of providing a for_each method that iterates over the elements and passes them to a functor. The method would increment a counter on entry and decrement it on exit. If the counter is non-zero, every time a method that would modify the container (and cause iterator invalidation) is called, it would return early, resulting in a no-op. See the following simple example:

#include <cstdlib>
#include <cstdio>
#include <cstring>
#include <cassert>


struct FloatArray {

    float *data = nullptr;
    size_t count = 0;
    size_t niter = 0;


    float *begin() { return data; }
    float *end() { return data + count; }

    template<typename Fn>
    void
    for_each(Fn &&fn)
    {
        niter++;
        for (size_t k = 0; k < count; k++) {
            fn(data[k]);
        }
        niter--;
    }
    
    void
    push_back(float f)
    {
        if (niter) {
            /* Return early if the container is being iterated over. */
            return;
        }
        float *new_data = new float[count + 1];
        memcpy(new_data, data, sizeof(*data) * count);
        if (data) {
            delete[] data;
        }
        new_data[count] = f;
        data = new_data;
        count++;
    }
};


int
main(int argc, char **argv)
{
    FloatArray arr;

    arr.push_back(1.0f);
    arr.push_back(2.0f);
    arr.push_back(3.0f);

    arr.for_each(
        [&arr](float f)
        {
            if (f == 1.0f) {
                arr.push_back(42.0f); // no-op
            }
            printf("%f\n", f);
        }
    );
    for (float f : arr) {
        if (f == 1.0f) {
            arr.push_back(42.0f); // Undefined behaviour
        }
        printf("%fn", f);         
    }

    arr.push_back(4.0);
    return 0;
}

I guess similar behavior could be achieved using iterators that point back to the container, incrementing a counter on construction and decrementing it on destruction, but I think this (and iterators in general) is way more complex to implement (and understand) than the for_each method.
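For comparison, here is roughly what I imagine that counting guard would look like (a hypothetical sketch; as a side effect it would also keep the counter balanced if the functor throws):

// Hypothetical RAII guard: increments the container's niter for as long as
// an iteration is in progress, and decrements it on scope exit.
struct IterGuard {
    size_t &niter;
    explicit IterGuard(size_t &n) : niter(n) { ++niter; }
    ~IterGuard() { --niter; }
    IterGuard(const IterGuard &) = delete;
    IterGuard &operator=(const IterGuard &) = delete;
};

// Usage inside FloatArray::for_each:
//     IterGuard guard(niter);
//     for (size_t k = 0; k < count; k++) fn(data[k]);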

One concern would be that the “iterator counter” could overflow, but I am not expecting that many nested iterations.

I guess my question is: would using a for_each method make more sense for special-purpose containers that do not need iterators for reasons other than iterating over their elements (e.g. specifying a range, or being passed by reference to a container method), since it is simpler to implement and understand (IMO) and (more easily) avoids complexities such as iterator invalidation? Also, are there better implementations than the one described above for this kind of use case?

magento2 – Is "hole punching" in Magento 2 only the invalidation of the FPC (full-page cache)?

When looking into full-page caching in Magento 2, I realized that if I try to "hole-punch" my final_price.phtml block by setting getCacheLifetime to null, my final price block is still cached by the FPC.

If I disable the FPC, I see that the data for my final price block is cached in Redis (for now I only have a TTL of 20 seconds) and is deleted after its TTL expires. This seems correct.

So it looks like you need some kind of getIdentities function set up for the FPC to be "invalidated" and reload this block.

My point is that this does not seem to be real "hole punching"; it simply tells the FPC that a block's TTL has expired and that it is time for the FPC to fetch a fresh copy of the block, render the page, and put it back into the FPC.
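For reference, this is the shape of the mechanism as I understand it: a block exposing cache tags via getIdentities(). The interface and the CACHE_TAG constant are from core; the block class and the getProductId() helper are hypothetical:

<?php
// Hypothetical block: the identities tie the cached page to the product's
// cache tag, so saving the product invalidates the page in the FPC.
class FinalPrice extends \Magento\Framework\View\Element\Template
    implements \Magento\Framework\DataObject\IdentityInterface
{
    public function getIdentities()
    {
        // e.g. "cat_p_42"; assumes a getProductId() helper on this block
        return [\Magento\Catalog\Model\Product::CACHE_TAG . '_' . $this->getProductId()];
    }

    protected function getCacheLifetime()
    {
        return null; // what I tried for the "hole punching"
    }
}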

Any help would be great!

authentication – session invalidation for 2 browsers

I have an application with authentication using WSO2.
Let's say I have the following scenario:
1. I log in to the same application with the same credentials in 2 different browsers (Chrome and Firefox).
2. I change my password for my app in Chrome.
3. I return to Firefox, and the application session remains valid without being logged out.

My question is: how can I invalidate / log out the entire session in every browser when I change the password?

Thank you!

How to make an invalidation command of ownerID [discord.js]

The way it should work is that it checks whether the person has the MANAGE_MESSAGES permission. If they do not, it then checks whether they have the ID "1234567890". If they do, it does not return and continues to the rest of the code. How can I do this? I have tried everything I know about discord.js, but I cannot get it working.

I tried a code like

if (!message.member.hasPermission("MANAGE_MESSAGES") ||
    !["507408804145528832"].includes(message.author.id))
  return message.channel.send(noperm);

but it blocks far more people than it should. How do I do this correctly?
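For reference, here is the check I am aiming for, rewritten with && so the command is allowed either by the permission or by the hard-coded ID (noperm is assumed to be a message defined elsewhere):

// Block (and reply) only when BOTH checks fail: the member lacks
// MANAGE_MESSAGES and is not the hard-coded user.
if (
  !message.member.hasPermission("MANAGE_MESSAGES") &&
  !["507408804145528832"].includes(message.author.id)
) {
  return message.channel.send(noperm);
}
// ...the rest of the command runs here...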