amazon rds – RDS Oracle 11g ORA-01031: insufficient privileges to create materialized view

Here is my situation.

I was given an RDS Oracle 11g instance to manage. The resource was created through a Terraform script, and the "sysadmin" credentials were handed to me.

Everything works fine for me, except for one strange scenario.

I have permission to recreate views and to drop a materialized view, but when I tried to recreate the materialized view I received this error:

ORA-01031: insufficient privileges (when running CREATE MATERIALIZED VIEW)

A Google search suggested I was missing the CREATE TABLE privilege. But no, I already have it:

select * from user_sys_privs;

"USERNAME","PRIVILEGE","ADMIN_OPTION"
ADMIN,EXEMPT REDACTION POLICY,YES
ADMIN,ALTER DATABASE LINK,YES
ADMIN,EXEMPT ACCESS POLICY,YES
ADMIN,DROP ANY DIRECTORY,YES
ADMIN,CREATE TABLE,NO
ADMIN,SELECT ANY TABLE,YES
ADMIN,RESTRICTED SESSION,YES
ADMIN,ALTER PUBLIC DATABASE LINK,YES
ADMIN,CREATE SESSION,NO
ADMIN,EXEMPT IDENTITY POLICY,YES
ADMIN,GRANT ANY OBJECT PRIVILEGE,YES
ADMIN,UNLIMITED TABLESPACE,YES
ADMIN,CHANGE NOTIFICATION,YES
ADMIN,FLASHBACK ANY TABLE,YES
ADMIN,CREATE MATERIALIZED VIEW,NO

I tried granting the privilege to myself (the way I would in Postgres); the command completes fine but has no effect:

grant create table to ADMIN;

By the way, I am a rookie in the Oracle world; my experience is with SQL Server and Postgres. This is probably a silly question, but not for me. I appreciate any help.
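In case it helps others compare: direct grants and grants inherited through roles show up in different dictionary views, so a quick diagnostic (not a fix) to see where a privilege actually comes from is:

```sql
-- Effective privileges in the current session (direct + via enabled roles):
SELECT * FROM session_privs;

-- Privileges that arrive through roles rather than direct grants:
SELECT role, privilege FROM role_sys_privs;
```

If CREATE MATERIALIZED VIEW appears in session_privs but the statement still fails, the problem lies elsewhere (for example, in the schema that owns the underlying objects) rather than in the grant itself.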

What are the approaches to refreshing materialized views in Oracle when the underlying tables are updated frequently?

I am a web developer maintaining a web application that tracks orders, customers, products, etc., which my client uses internally. I use Oracle 12c, hosted on AWS RDS. My client has just changed some other systems, so the data structures have changed, and I am using a new schema in Oracle to store the new data in the new structures.

So that the web application does not have to be redesigned for the new data structures, the decision was made to implement materialized views in Oracle that join the new schema's data (reshaped into the legacy structure) with the legacy data.

So now I have to keep these materialized views refreshed so that the web application always sees the latest data. Ideally, the relevant materialized views would be refreshed every time a new record arrives in the new schema, but during working hours new data may arrive every few seconds. A compromise is fine: materialized views that are stale for a few minutes (maybe 5, or less ideally 10) would be acceptable.

My question is: what approach should I take to refreshing these materialized views? I don't want to overload Oracle with constant refreshes, and the web application should still give users a good experience when reading/writing data. I am far from an Oracle/DB expert, so I am not really sure what the options are. I guess I could have a cron job run every 5 minutes or so to refresh the stale materialized views one by one, but I wonder if that approach is a bit naive.

Currently I'm dealing with 14 materialized views (for now), and in my tests some of them take up to 2.5 minutes to complete a full refresh.
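For what it's worth, a cron-style refresh can live inside the database itself. A minimal sketch with DBMS_SCHEDULER (job and MV names are placeholders, and METHOD => 'F' assumes materialized view logs exist on the base tables so a fast refresh is possible):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_LEGACY_MVS',   -- placeholder name
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN
                            DBMS_MVIEW.REFRESH(list   => 'ORDERS_MV,CUSTOMERS_MV',
                                               method => 'F');  -- fast refresh
                          END;]',
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=5',
    enabled         => TRUE);
END;
/
```

A fast (incremental) refresh via MV logs is usually what makes a 5-minute cadence affordable; a full refresh that takes 2.5 minutes every 5 minutes would leave very little headroom.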

Created a materialized view in Oracle that will not refresh

I created a materialized view log along with a materialized view called update_nboe based on the NBOE_EMPLOYEES_TEST table, using the following code:

CREATE MATERIALIZED VIEW LOG ON NBOE_EMPLOYEES_TEST WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW update_nboe
REFRESH FAST ON DEMAND
AS
SELECT E.EMP_ID, E.USERNAME ,E.NAME, E.LOCATION , E.TITLE, E.LOCATION_CODE, E.RS_GROUP
FROM NBOE_EMPLOYEES_TEST E;

Then I updated NBOE_EMPLOYEES_TEST by inserting additional records, hoping the materialized view would pick them up when refreshed on demand with the following code:

exec dbms_mview.refresh('update_nboe', atomic_refresh => TRUE);

However, I see a red cross on the materialized view in my connections panel, and it won't refresh either.

I would appreciate some input.
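One diagnostic worth trying (assuming MV_CAPABILITIES_TABLE has been created in the schema by running Oracle's utlxmv.sql script) is DBMS_MVIEW.EXPLAIN_MVIEW, which reports whether fast refresh is possible and, if not, why:

```sql
EXEC DBMS_MVIEW.EXPLAIN_MVIEW('UPDATE_NBOE');

SELECT capability_name, possible, msgtxt
FROM   mv_capabilities_table
WHERE  mvname = 'UPDATE_NBOE'
AND    capability_name LIKE 'REFRESH_FAST%';
```

Rows with POSSIBLE = 'N' come with a message explaining what blocks the fast refresh.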

Replication: scheduled view snapshots (without using materialized views or Oracle GoldenGate)?

I have 40 views in an Oracle 18c GIS database that are used on a map in a work order management (WMS) system.

  • The views are shown on the WMS map through a web / REST service.
  • Views have an average of 10,000 rows per view.

The views join tables over a dblink to a separate Oracle database and, as a result, are too slow to use on the WMS map (a 3-second delay per map refresh). It also seems like a bad idea to recompute the views every time a user refreshes the map, since the map does not need to be updated in real time.

As an alternative, I would like to take snapshots of the views weekly. The snapshots would be static tables that would be fast on the WMS map.

The catch:

Unfortunately, due to office politics, using technology such as materialized views or Oracle GoldenGate to solve this problem is not an option.


What are my options for taking scheduled snapshots of Oracle views (without using materialized views or Golden Gate)?

For example, I could write a SQL script that truncates the static tables and inserts the view rows into them on a schedule. As a rookie, I don't know how efficient or risky that option would be, or whether there are better alternatives.
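A minimal sketch of that truncate-and-reload idea, scheduled with DBMS_SCHEDULER (table, view, and job names are placeholders; TRUNCATE is DDL, so it is wrapped in EXECUTE IMMEDIATE):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'WEEKLY_VIEW_SNAPSHOT',   -- placeholder
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN
        EXECUTE IMMEDIATE 'TRUNCATE TABLE work_orders_snap';
        INSERT INTO work_orders_snap SELECT * FROM work_orders_vw;
        COMMIT;
      END;]',
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=2',
    enabled         => TRUE);
END;
/
```

One risk of truncate-and-reload is that the map sees an empty table while the insert runs; loading into a staging table and swapping names, or using DELETE inside a single transaction, avoids that window at the cost of more undo.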


Optimization: Is the query optimizer allowed to create temporary (materialized) views if performance improves?

Is the query optimizer allowed to create temporary (materialized) views when doing so can improve performance? Sorry if this question seems very trivial.

Or, in other words: does the query optimizer consider plans that involve creating a temporary table, storing it on disk, and then using it during execution of the query?
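Whether this happens is product-specific, but as one concrete example of the idea (PostgreSQL 12+, which even lets you request it explicitly): a CTE can be materialized as a temporary intermediate result instead of being inlined into the outer query:

```sql
WITH recent_orders AS MATERIALIZED (   -- computed once, stored, then scanned
    SELECT *
    FROM   orders
    WHERE  placed_at > now() - interval '7 days'
)
SELECT customer_id, count(*)
FROM   recent_orders
GROUP  BY customer_id;
```

Engines also materialize intermediate results on their own (hash tables, sorts, temp files spilled to disk) whenever the chosen plan calls for it.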

python: traverse a materialized-path hierarchy to collect child nodes

I have an ordered list of materialized-path hierarchy entries:

paths = ('10001', '1000100001', '100010000100001', '100010000100002', '1000100002', '100010000200001', '100010000200002')

I am collecting the immediate child paths (nodes) for each non-leaf node.

from typing import List, Tuple

steplen = 5  # number of characters per hierarchy level
max_depth = max(len(p) for p in paths) // steplen

def _traverse_hierarchy(parent: str = '', idx: int = 0) -> Tuple[int, List[str]]:
    parent_dr_paths: List[str] = []  # direct reports of `parent`
    while idx < len(paths) and paths[idx].startswith(parent):
        path = paths[idx]
        if len(path) == len(parent) + steplen:  # direct report
            parent_dr_paths.append(path)

        idx += 1
        if len(path) // steplen != max_depth:  # not a leaf node
            new_idx, children_paths = _traverse_hierarchy(parent=path, idx=idx)
            print(f'Direct Reports for {path}', children_paths)
            idx = new_idx

    return idx, parent_dr_paths

I'm not worried about a stack overflow error (the hierarchy won't be too deep).

How can I improve/optimize this code?
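For comparison, here is a minimal alternative sketch of my own (not from the question): since each path embeds its parent as a fixed-width prefix, one non-recursive pass can group every path under its immediate parent.

```python
from collections import defaultdict

STEPLEN = 5  # characters per hierarchy level


def children_by_parent(paths):
    """Group each path under its immediate parent prefix in one pass."""
    result = defaultdict(list)
    for path in paths:
        if len(path) > STEPLEN:                 # root nodes have no parent
            result[path[:-STEPLEN]].append(path)
    return dict(result)


paths = ('10001', '1000100001', '100010000100001', '100010000100002',
         '1000100002', '100010000200001', '100010000200002')
print(children_by_parent(paths))
```

This trades the index-threading recursion for a dictionary lookup, which is easier to test and immune to deep hierarchies.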

Oracle – How to manage disk space allocation for materialized views?

Summary: I have materialized views in Oracle 11g that only ever consume more disk space, unlike normal tables, where deleted rows are marked as deleted and the statistics eventually report the space as free (still allocated to the table, available for reuse). Tablespace usage only grows for the materialized views, unlike the statistics for the source tables.
Tested in Oracle 12c with the same results. How can I make the MVs reuse the space of deleted rows?

What have I done?
I have these partitioned materialized views configured in a separate schema and a separate tablespace from the source tables (I know the partitions could have been created dynamically; call it technical debt).

CREATE MATERIALIZED VIEW replication_schema.origin_table
PARTITION BY RANGE(tbl_timestamp) 
(
    PARTITION tbl_before_2016 VALUES LESS THAN (TO_TIMESTAMP('2016-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS')),
    PARTITION tbl_2016_01 VALUES LESS THAN (TO_TIMESTAMP('2016-02-01 00:00:00','YYYY-MM-DD HH24:MI:SS')),
    PARTITION tbl_2016_02 VALUES LESS THAN (TO_TIMESTAMP('2016-03-01 00:00:00','YYYY-MM-DD HH24:MI:SS')),
...
 PARTITION tbl_after_2025 VALUES LESS THAN (MAXVALUE)
)
REFRESH FORCE ON DEMAND START WITH SYSDATE NEXT sysdate+1/1440
AS SELECT * FROM origin_schema.table;

And they also have some indexes, some global and some local.

CREATE INDEX tbl_account_index ON replication_schema.origin_table (tbl_account DESC) LOCAL;
CREATE INDEX tbl_column1_index ON replication_schema.origin_table (tbl_column1 DESC) LOCAL;
CREATE INDEX tbl_column2_index ON replication_schema.origin_table (tbl_column2 DESC) LOCAL;
CREATE INDEX tbl_column3_index ON replication_schema.origin_table (tbl_column3 DESC);
CREATE INDEX tbl_column4_index ON replication_schema.origin_table (tbl_column4 DESC);

Most of the time they receive new rows (approximately 4M/month), but users have set up a process that removes old rows from the source tables every two weeks. It can delete up to 500K–1M rows from each replicated table each time.

There are seven materialized views in this schema. Each pulls data from one source table.

What we see is that, contrary to what happens with the source tables, the space reported as free in dba_tables does not change over time, and tablespace usage for these materialized views only grows.

If I wait a while after the rows are deleted and run this query:

select df.tablespace_name "Tablespace",
       tu.totalusedspace "Used MB",
       (df.totalspace - tu.totalusedspace) "Free MB",
       df.totalspace "Total MB",
       round(100 * ((df.totalspace - tu.totalusedspace) / df.totalspace)) "Pct. Free"
from   (select tablespace_name,
               round(sum(bytes) / 1048576) totalspace
        from   dba_data_files
        group  by tablespace_name) df,
       (select tablespace_name,
               round(sum(bytes) / (1024 * 1024)) totalusedspace
        from   dba_segments
        group  by tablespace_name) tu
where  df.tablespace_name = tu.tablespace_name
and    df.totalspace <> 0;

it shows an increase in the Free MB column (space in dba_data_files minus the allocation declared in dba_segments) for the source tablespace, but Used MB for the replication tablespace never decreases; it only grows with new rows (for more than three years now):

Tablespace      Used MB    Free MB  Total MB   Pct. Free
SYSTEM          491        9        500        2
SYSAUX          1628       162      1790       9
UNDOTBS1        0          9645     9645       100
ORIGIN_DATA     2705       1391     4096       34
ORIGIN_REP_DATA 1975       2121     4096       52

That tablespace contains only these materialized views; no other objects are in use there.

I ran the Segment Advisor to see what I could do:

variable id number;
begin
  declare
  name varchar2(100);
  descr varchar2(500);
  obj_id number;
  begin
  name:='REPCHECK';
  descr:='Replication advisory';

  dbms_advisor.create_task (
    advisor_name     => 'Segment Advisor',
    task_id          => :id,
    task_name        => name,
    task_desc        => descr);

  dbms_advisor.create_object (
    task_name        => name,
    object_type      => 'TABLE',
    attr1            => 'REPLICATION_SCHEMA',
    attr2            => 'ORIGIN_TABLE',
    attr3            => NULL,
    attr4            => NULL,
    attr5            => NULL,
    object_id        => obj_id);

  dbms_advisor.set_task_parameter(
    task_name        => name,
    parameter        => 'recommend_all',
    value            => 'TRUE');

  dbms_advisor.execute_task(name);
  end;
end; 
/

And it says:

Perform a reorganization on the origin_table object, the estimated saving is xxx bytes

If I query the recommendations through the procedure:

select
   tablespace_name,
   allocated_space,
   used_space reclaimable_space
from
   table(dbms_space.asa_recommendations('TRUE', 'TRUE', 'ALL'))

Returns

ORIGIN_REP_DATA 100663296   38419844

But I only get errors when I try to execute SHRINK SPACE (or SHRINK SPACE COMPACT):

ORA-10635: Invalid segment or tablespace type
10635, 00000, "Invalid segment or tablespace type"
*Cause:  Cannot shrink the segment because it is not in an auto segment
         space managed tablespace or it is not a data, index or LOB segment.
*Action: Check the tablespace and segment type and reissue the statement.

Long story short: what can I do to stop wasting disk space on these materialized views? How do I perform maintenance on them? Should I drop and recreate them? Datafile usage in the tablespace is growing by around 10 GB per month and I am running out of time (and space). Thank you.
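The ORA-10635 text hints that the tablespace may not use ASSM (automatic segment space management), which SHRINK SPACE requires. That is easy to confirm (a diagnostic sketch, tablespace name as in the question):

```sql
SELECT tablespace_name, segment_space_management
FROM   dba_tablespaces
WHERE  tablespace_name = 'ORIGIN_REP_DATA';
```

If it reports MANUAL, SHRINK SPACE is simply not available there; moving the segments (or rebuilding the MVs) into an ASSM tablespace would be a prerequisite for reclaiming space that way.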

postgresql – Postgres trigger that refreshes the materialized view is missing a record

We have multiple inserts into a table that has a statement-level trigger to refresh a materialized view after the inserts. All the inserts into the table work fine, but the last record inserted into the source table is not reflected in the materialized view. Any idea why the last commit is not reflected in the materialized view?
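For context, a minimal sketch of the kind of setup described (table, view, and function names are my own placeholders; EXECUTE FUNCTION needs PostgreSQL 11+, older versions use EXECUTE PROCEDURE):

```sql
CREATE OR REPLACE FUNCTION refresh_orders_mv() RETURNS trigger AS $$
BEGIN
  REFRESH MATERIALIZED VIEW orders_mv;
  RETURN NULL;   -- return value is ignored for AFTER ... STATEMENT triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_mv_refresh
AFTER INSERT ON orders
FOR EACH STATEMENT
EXECUTE FUNCTION refresh_orders_mv();
```

Note that the refresh runs inside the inserting transaction, so sessions comparing the table and the view across transaction boundaries can observe them at different points in time.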

Optimization: How to get correct counts in a materialized view with several joins?

I am optimizing a query by moving it into a materialized view in PostgreSQL, but the query logic does not carry over correctly to the mat view.

The query involves multiple joins and has a long execution time, so I tried the same query as a materialized view (created in PostgreSQL 11), but its results are incorrect.

There are three key tables: 1. posts, 2. topics, 3. post_topics. The post_topics table contains post_id and topic_id.

The topics table is a list of topics, each with several values; for example, if the topic is 'egg', the values associated with egg are 'breakfast', 'dinner', 'cheese', etc. The posts table contains posts related to the topics, and each topic can have multiple posts.

I want to get the counts of the values from the topic table: how many for breakfast, dinner, cheese, etc., when the topic id is that of egg?

Original query:

SELECT t.id AS topic_id, t.value AS value, t.topic AS topic, COUNT(c.topic_id) AS count
FROM post_topics a
JOIN posts b ON a.post_id = b.id
JOIN post_topics c ON a.post_id = c.post_id
--JOIN post_locations pl ON pl.post_id = c.post_id
JOIN topics t ON c.topic_id = t.id --AND t.topic = 'cuisine'
WHERE a.topic_id = '1234547hnunsdrinfs'
AND t.id != '1234547hnunsdrinfs'
AND b.date_posted BETWEEN ('2019-06-17'::date - interval '6 month') AND '2019-06-17'::date
GROUP BY t.id, c.topic_id
ORDER BY count DESC
LIMIT 20

Mat view:

Create MATERIALIZED VIEW top_associations_mv as 
SELECT t.id as topic_id, t.value as value, t.topic as topic, COUNT(c.topic_id) as count 
FROM post_topics a 
JOIN posts b ON a.post_id = b.id 
JOIN post_topics c on a.post_id = c.post_id
JOIN topics t on c.topic_id = t.id 
WHERE t.id != c.post_id and (b.date_posted > (('now'::text)::date - '6mons'::interval)) 
GROUP BY t.id, c.topic_id 
ORDER BY count DESC

My expected result is:

The counts per associated topic value (breakfast, dinner, cheese, etc.) when filtering on a given topic id, as in the original query. But in the actual result from the mat view, the count is incorrect.
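A common fix for this pattern (a sketch based on the tables described, not tested against the real schema) is to keep the filter column inside the materialized view and group by it, so the per-topic filter the original query applied at definition time can instead be applied at query time:

```sql
CREATE MATERIALIZED VIEW top_associations_mv AS
SELECT a.topic_id   AS source_topic_id,   -- keep the filter column
       t.id         AS topic_id,
       t.value      AS value,
       t.topic      AS topic,
       COUNT(c.topic_id) AS count
FROM post_topics a
JOIN posts b       ON a.post_id = b.id
JOIN post_topics c ON a.post_id = c.post_id
JOIN topics t      ON c.topic_id = t.id
WHERE t.id <> a.topic_id
  AND b.date_posted > (CURRENT_DATE - INTERVAL '6 months')
GROUP BY a.topic_id, t.id, t.value, t.topic;

-- Query time, mirroring the original query:
-- SELECT * FROM top_associations_mv
-- WHERE source_topic_id = '1234547hnunsdrinfs'
-- ORDER BY count DESC
-- LIMIT 20;
```

The mat view in the question dropped the `a.topic_id = ...` predicate (and compared `t.id` to `c.post_id`), which changes which pairs are counted; preserving the filter column keeps the counts consistent with the original query.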