Oracle 12c – Logically There Is A Redo Group But Not Physically – Database Can’t Open

I have an Oracle 12c database for my Enterprise Manager.

I could not connect to Enterprise Manager because the database was down.

I started the listener, but the database stayed in MOUNT mode. When I try to open the database read-write, I get the following error:

SQL> alter database open;

ERROR at line 1:

ORA-00313: open failed for members of log group 3 of
thread 1

ORA-00312: online log 3 thread 1:
'/path/of/redo/redo03.log'

ORA-27037: unable to
obtain file status

Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

When I go to the file location of redo03, the file is not there.

So the log group exists logically, but not physically.

There are some queries and results:

SQL> col member format a50
SQL> select group#, type, member from v$logfile;

    GROUP# TYPE    MEMBER
---------- ------- --------------------------------------------------
         3 ONLINE  /path/of/redo/redo03.log
         2 ONLINE  /path/of/redo/redo02.log
         1 ONLINE  /path/of/redo/redo01.log
         4 ONLINE  /path/of/redo/redo04.log

SQL> select group#, thread#, sequence#, bytes/1024/1024, members, status from v$log;

    GROUP#    THREAD#  SEQUENCE# BYTES/1024/1024    MEMBERS STATUS
---------- ---------- ---------- --------------- ---------- ----------------
         1          1      61525              50          1 INACTIVE
         4          1          0              50          1 UNUSED
         3          1      61527              50          1 CURRENT
         2          1      61526              50          1 INACTIVE

I tried the following plan: create redo group 4, then drop redo group 3 and recreate it.

There was no problem creating redo group 4. I used this command below:

alter database add logfile thread 1 group 4 '/path/of/redo/redo04.log' size 50m;

But when I try to drop group 3, it gives the following error:

SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE DROP LOGFILE GROUP 3
*

ERROR at line 1:

ORA-01623: log 3 is current log for instance dbname (thread 1) –
cannot drop

ORA-00312: online log 3 thread 1: '/path/of/redo/redo03.log'

Next I wanted to disable the thread that the redo group belongs to, and then drop redo group 3. But that also fails:

SQL> select thread#, status, enabled from v$thread;

   THREAD# STATUS ENABLED
---------- ------ --------
         1 OPEN   PUBLIC


SQL> alter database disable thread 1;
alter database disable thread 1
*

ERROR at line 1:

ORA-01109: database not open

When I try to switch the logfile, it gives the following error:

SQL> alter system switch logfile;
alter system switch logfile
*

ERROR at line 1:

ORA-01109: database not open

As a result, redo group 3 exists logically but the file does not exist physically. I am sure that nobody deleted the file, because I checked with the history command on the Linux system. I can't open the database because redo group 3 cannot be found; I can't switch the logfile, I can't disable the thread, and I can't drop and recreate redo group 3. Can you please help with this?

Best regards,
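For reference, the commonly suggested fix for this situation (the lost log is the CURRENT group, so it can be neither dropped nor switched away from while mounted) is to clear the group, which recreates the missing file on disk. This is a sketch, not a guaranteed fix: clearing discards whatever redo was in group 3, so if the instance did not shut down cleanly Oracle may instead demand incomplete recovery (RECOVER DATABASE UNTIL CANCEL followed by ALTER DATABASE OPEN RESETLOGS). Either way, take a full backup immediately afterwards.

```sql
-- Run while the database is in MOUNT mode.
-- UNARCHIVED is needed because the CURRENT group was never archived.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

-- If the clear succeeds, the file is recreated and the database can open.
ALTER DATABASE OPEN;
```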

Is there really a "best" database? MySQL or PostgreSQL? Or is it simply a matter of taste?

I have been studying programming for a year; I am already comfortable with front-end work and use PHP plus a database on the back end.
To improve my future sites, is it better for me to use PostgreSQL or MySQL?
Thanks!

javascript – Is it possible to update the database without referring to the ID number?

I am using JavaScript on App Lab. I have this code:

var player = {};
player.Username=" ";
onEvent("confirmUsernameButton", "click", function() {
  updateRecord("AllUserData", player, function() {
    player.Username = getText("Username_input");
  });
});

However, it is not updating the database at all. Can I update the database from a click event without referring to the id number, or could I use each user's specific "User ID" to update the database instead? Any help is appreciated.

Link to app: https://studio.code.org/projects/applab/P2Vrn_Mmpb1g0cXOoTOL4XzFdIiaQcQAvwxTA_IaGlQ
To access the code and database, press “view code” at the top right corner of the screen.
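Two things stand out in the snippet above: App Lab's updateRecord only knows which row to change from the record's id property, and player.Username is assigned inside the update callback, i.e. after the update has already run with the old (blank) value. A hedged sketch of one way to restructure it (currentUserId is a hypothetical variable standing in for however the app identifies the logged-in user):

```javascript
// Hypothetical: assumes the app keeps the id of this user's row
// in a variable. Replace currentUserId with your own lookup.
var currentUserId = 1;

onEvent("confirmUsernameButton", "click", function() {
  // Read the input BEFORE updating, not inside the update callback.
  var newName = getText("Username_input");
  // updateRecord needs the record's id, so fetch the row first.
  readRecords("AllUserData", {id: currentUserId}, function(records) {
    if (records.length > 0) {
      var rec = records[0];
      rec.Username = newName;            // keep rec.id intact
      updateRecord("AllUserData", rec, function(record, success) {
        console.log(success ? "Updated" : "Update failed");
      });
    }
  });
});
```

If the table has a unique "User ID" column instead, you can pass that to readRecords as the search filter ({UserID: someValue}) and still let updateRecord match on the id of the record it returns.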

mariadb – Best way to combine many disparate schemas for database table creation?

I have a bunch of data that consists of public records from the state government dating back to the early 90s. Along the way, the data organization and attributes have changed significantly. I put together an Excel sheet containing the headers in each year’s file to make sense of it and it came out like this:

[screenshot: spreadsheet of each year's file headers, color-coded by similarity]

As you can see from the checksum column on the left, there are 8 different schemas from 1995 through 2019. The data between them can also vary quite a bit. I've color-coded columns that are logically similar. Sometimes the data is mostly the same but the column names have changed; sometimes, different data altogether appears or disappears.

I think it is pretty clear that the best goal here is to have 1 table combining all of this information rather than 8 disparate tables, since I want to be able to query across all of them efficiently. Each table contains ~150,000 rows so the table would have around 4 million records. Each table has 55-60 fields approximately.

I’ve been struggling for a few days with how to tackle it. Half of the files were fixed-width text files, not even CSVs, so it took me a long time to properly convert those. The rest are thankfully already CSVs or XLSX. From here, I would like to end up with a table that:

  • includes a superset of all available logically distinct columns – meaning that the ID number and ID Nbr columns would be the same in the final table, not 2 separate tables
  • has no loss of data

Additionally, there are other caveats such as:

  • random Filler columns (like in dark red) that serve no purpose
  • No consistency with naming, presence/absence of data, etc.
  • data is heavily denormalized but does not need to be normalized
  • there’s a lot of data, 2 GB worth just as CSV/XLS/XLSX files

I basically just want to stack the tables top to bottom into one big table, more or less.

I’ve considered a few approaches:

  • Create a separate table for each year, import the data, and then try to merge all of the tables together
  • Create one table that contains a superset of the columns and add data to it appropriately
  • Try pre-processing the data as much as possible until I have one large file with 4 million rows that I can convert into a database

I’ve tried importing just the first table into both SQL Server and Access but have encountered issues there with their inability to parse the data (e.g. duplicate columns, flagging columns with textual data as integers). In any case, it’s not practical to manually deal with schema issues for each file. My next inclination was to kind of patchwork this together in Excel, which seems the most intuitive, but Excel can’t handle a spreadsheet that large so that’s a no-go as well.

The ultimate goal is to have one large (probably multi-GB) SQL file that I can copy to the database server and run, maybe using LOAD DATA INFILE or something of that sort, but with the data all ready to go, since it would be unwieldy to modify afterwards.

Which approach would be best? Additionally, what tools should I be using for this? Basically the problem is trying to “standardize” this data with a uniform schema without losing any data and being as non-redundant as possible. On the one hand, it doesn’t seem practical to go through all 25 tables manually and try to get them imported or try to change the schema on each one. I’m also not sure about trying to figure out the schema now and then modifying the data, since I can’t work with it all at once? Any advice from people who have done stuff like this before? Much appreciated!
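Of the three approaches, heavy pre-processing is usually the most tractable: map every file's headers onto one canonical schema in a script, stack the rows, and only then bulk-load the single combined file (with LOAD DATA INFILE on MariaDB). A Python sketch of the header-mapping step; the synonym map and column names here are invented placeholders, and the real mapping would come from the Excel sheet of headers:

```python
def unify(rows_by_year, synonyms):
    """Stack per-year row dicts into one list with canonical column names.

    synonyms maps a source header to its canonical name, or to None
    for filler columns that should be dropped outright.
    """
    combined = []
    for year, rows in rows_by_year.items():
        for row in rows:
            unified = {"source_year": year}
            for col, val in row.items():
                # Unmapped headers fall back to a normalized form of themselves.
                canon = synonyms.get(col, col.lower().replace(" ", "_"))
                if canon is not None:
                    unified[canon] = val
            combined.append(unified)
    return combined

# Two of the hypothetical header variants mapped onto one schema:
synonyms = {"ID Nbr": "id_number", "ID number": "id_number", "Filler": None}
rows = unify({1995: [{"ID Nbr": "1", "Filler": "x"}],
              2019: [{"ID number": "2"}]}, synonyms)
# rows[0] == {"source_year": 1995, "id_number": "1"}
```

The same loop works with csv.DictReader over the real files; keeping a source_year column preserves provenance, so no data is lost even where a column only exists in some years (it is simply NULL elsewhere).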

Does a transaction lock the entire database

I am taking over for someone and at the beginning of his script the previous guy

mysql – Database Design for Pharmacy store with wholesale and retail

I have designed a DB (inventory management) for a pharmacy store that sells medicine both wholesale and retail. For retail selling, I store each medicine's units, with their conversions, in separate fields.

[screenshot: inventory table schema]

When I generate the inventory movement report or sum transactions, I need to convert each ordered unit, which makes my logic very complex. Is there a better approach for this?

Thanks.
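One common alternative, sketched below with hypothetical table and column names, is to record every stock movement in a single base unit (e.g. tablets) and keep the conversion factors in a unit table, so reports reduce to summing one column instead of converting per row:

```sql
CREATE TABLE unit (
  unit_id     INT PRIMARY KEY,
  name        VARCHAR(30) NOT NULL,  -- e.g. 'box', 'strip', 'tablet'
  base_factor INT NOT NULL           -- tablets per unit: box=100, strip=10, tablet=1
);

CREATE TABLE stock_movement (
  movement_id INT PRIMARY KEY AUTO_INCREMENT,
  medicine_id INT NOT NULL,
  unit_id     INT NOT NULL REFERENCES unit(unit_id),
  qty         INT NOT NULL,          -- quantity in the unit entered
  qty_base    INT NOT NULL           -- qty * base_factor, computed at insert time
);

-- Reports then become a simple sum over the base-unit column:
SELECT medicine_id, SUM(qty_base) AS tablets_moved
FROM stock_movement
GROUP BY medicine_id;
```

The original unit and quantity are still kept on each row, so nothing is lost for display, but the reporting arithmetic happens once at insert time rather than in every query.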

database design – Is it better to omit the associative table in a many-to-many relationship?

Let's say I have 2 entities, Student and Class, with a many-to-many relationship. Textbooks usually recommend creating an associative table (perhaps called Enrollment) to convert the many-to-many relationship into two one-to-many relationships.
[diagram: Student, Enrollment, and Class tables]

So this design above is correct and I have no problem with it.

However, I am also thinking of a simpler design like this:

Student
id name class_id
1  Jake 1
2  Jake 2
3  John 1


Class
id name
1  Math
2  English
3  Physics

And I think it can also work fine without even creating a third table (only the Student table has the foreign key).

So my question is: what are the pros and cons of the second method (without an associative table)? Is there any particular case that makes the textbook method (the first solution) better than the second, or vice versa? Thanks.
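The cost of the second design is visible in the sample data itself: Jake appears twice with different ids, so renaming him means updating multiple rows, and a student with no class cannot be stored at all. For contrast, a minimal sketch of the textbook three-table design:

```sql
CREATE TABLE student (
  id   INT PRIMARY KEY,
  name VARCHAR(50) NOT NULL
);

CREATE TABLE class (
  id   INT PRIMARY KEY,
  name VARCHAR(50) NOT NULL
);

CREATE TABLE enrollment (
  student_id INT NOT NULL REFERENCES student(id),
  class_id   INT NOT NULL REFERENCES class(id),
  PRIMARY KEY (student_id, class_id)  -- each student enrolls in a class at most once
);
```

Here each student and each class exists exactly once, and the composite primary key prevents duplicate enrollments, which the second design cannot enforce.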

database – Storing static data idea — is it best practice?

If the data truly does not change, changes only over very long time frames, or must be crucially fast to access, then there is wriggle room to store it directly in the application.

If you are going to do that, however, it makes much more sense to leverage the type system of the language you are using to get a strong name, strong behaviour, and immutability guarantees. In particular, with a strong name a code editor can trivially locate usages of the object.

But if none of those apply, you are probably edging toward laziness or a bad design choice, in which case push it back into the database and stop being lazy. If it worries you, measure it: collect the data and see what is what. Maybe you really are in the first category and this makes sense. Just don't presume so.
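As an illustration of leveraging the type system, here is a hypothetical Python sketch (the names are invented) of static reference data expressed as an enum:

```python
from enum import Enum

class Country(Enum):
    """Static reference data baked into the application as a type."""
    US = ("United States", "USD")
    DE = ("Germany", "EUR")

    def __init__(self, display_name, currency):
        self.display_name = display_name
        self.currency = currency

# Strongly named and effectively immutable: an editor can find every
# usage of Country.US, and members cannot be reassigned at runtime.
```

Compared with a JSON file of the same data, a typo like Country.UK fails at import time rather than silently returning nothing in production.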

This probably sounds trite, but a JSON data file isn't compiled.

  • If it were corrupted, you wouldn't know until it was used.
  • Depending on your configuration system, the developer configs are probably not used in production, so that's another layer of moving parts that can break down.
  • Being an external dependency, it is probably mocked out in unit tests, or a safe short list is used instead of the real production file.

This is a large source of risk and errors. At least with a database/compiled source these structural issues can be detected early and fixed. Also being part of the stock data available to the program they probably participate in a few unit/integration tests.

There are some systems where a JSON data file would be on par, but those systems provide at least two functions: first, they prove that the file can be loaded, and second, that when loaded the data is in the right shape.
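Those two checks (the file loads at all, and the loaded data has the right shape) can be made explicit in a small loader. A Python sketch with hypothetical field names:

```python
import json

def load_static_list(text):
    """Parse and shape-check a static JSON data file."""
    data = json.loads(text)  # raises immediately if the file is corrupted
    if not isinstance(data, list):
        raise ValueError("expected a JSON array")
    for row in data:
        if "code" not in row or "label" not in row:
            raise ValueError("row missing required fields: %r" % (row,))
    return data
```

Running this loader in a unit test against the real production file recovers most of the early-failure benefit that compiled-in data gives you for free.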

backup – Can’t import MySQL database from mysqldump

I am interested in backing up a WordPress (multisite) website.

Because this WordPress lives on a shared hosting environment, my options (for daily automated backups not managed by my host) are a little limited.

To work around that, I’ve written a simple Bash script that pulls in the files via rsync. That is running perfectly.

To back up the MySQL database, I've created a daily cron job which of course runs server-side. I then use rsync to pull the dump down into my local backup archive alongside the files.

I decided to test the MySQL dump for restorability today by trying to import it, via phpMyAdmin, into another cPanel account.

These are the settings that I used.

[screenshot: phpMyAdmin import settings]

The import fails with the following message:

[screenshot: phpMyAdmin import error message]

It's been years since I did much with MySQL databases. Could anybody point me in the right direction to figure out what exactly is failing? It would be much appreciated.

database – Job and Employee Performance Tracking with Product Safety

We are looking to connect a WordPress site to our database. An employee would scan an order barcode (generated from our database) with an RFID scanner, and the WordPress site would query the order, pulling into a form or table the product names on the order as well as the safety information for those products from specific tables/columns in SQL. When the employee finishes the order, they select Done in WordPress, which records a start time (when they scanned the order barcode) and a finish time (when they mark it completed), along with a place to scan an employee barcode so it logs the employee name. We are looking for an easy way to do this without extensive programming knowledge.