databases – Help accessing MongoDB API data using a Node.js website

I need help accessing data from MongoDB using Node to build a website.

I have data being extracted from the API into MongoDB, where it is stored as JSON documents.

When I try to display the data on a website, using Node and Express to host it and Monk to connect to Mongo, I cannot get past the object fields in the JSON documents. The HTML page displays it as ‘object’ instead of the data I need to see, and I cannot build up the hierarchy to display what is nested below the object field. I need my website to display the full track information that is in the database. Is there a way to convert this data so I can display it? Please correct the code or write your own to solve this issue. Thank you in advance.

Attached are screenshots of the database data; of app.js, index.js, and the EJS file being rendered; and of the output on the website.

Database screenshot
App.js screenshot
Index.js screenshot
Main website code screenshot
Current output screenshot
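For what it’s worth, the ‘object’ output is how JavaScript stringifies a nested object when EJS prints it with <%= %>. Without seeing the screenshots, here is a sketch of one way through it (the tracks collection name and all field names are assumptions): flatten each document into path/value pairs on the server and loop over those scalars in the template.

```javascript
// Flatten a nested document into ["dotted.path", value] pairs so a template
// can loop over scalar values instead of printing "[object Object]".
function flatten(doc, prefix = "") {
  const rows = [];
  for (const [key, value] of Object.entries(doc)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      rows.push(...flatten(value, path)); // recurse into nested objects
    } else {
      rows.push([path, value]);
    }
  }
  return rows;
}

// Hypothetical Express route (the "tracks" collection name is an assumption):
//   app.get("/", async (req, res) => {
//     const docs = await db.get("tracks").find({});
//     res.render("index", { rows: docs.map((d) => flatten(d)) });
//   });
//
// And in the EJS template:
//   <% rows.forEach(function (doc) { %>
//     <% doc.forEach(function ([path, value]) { %>
//       <li><%= path %>: <%= value %></li>
//     <% }) %>
//   <% }) %>
```

If you only need a quick dump rather than a hierarchy, `<%- JSON.stringify(track) %>` in the template also gets past the ‘object’ display.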

SQL Server – Benefits of splitting databases across different logical drives

We’re about to start a project to migrate a large DWH to new physical servers in a new data centre. The current server spec is SQL Server Enterprise 2016 SP2 running on Windows 2012 R2. The new servers will be MSSQL 2019 Enterprise running on Windows 2019.

SAN storage for the current and new servers is an all-flash storage array. In the current environment, as well as separating data and log files onto different logical drives, the data files for different databases are also split across different logical drives.

  • Local SSD – TempDb
  • Logical Drive 1 – log files
  • Logical Drive 2 – data files for staging databases
  • Logical Drive 3 – data files for user facing databases
  • Logical Drive 4 – data files for support databases (ReportServer, MDS database)

As part of the server migration I am considering combining all data files onto a single logical drive.

  • Local SSD – TempDb
  • Logical Drive 1 – log files
  • Logical Drive 2 – data files

Aside from easier database file management, are there any performance benefits to keeping the data files split across different logical drives? Do multiple logical drives give better IO, even though ultimately it’s the same physical storage array?
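One way to frame the decision: whether the current split is actually buying IO latency can be measured on the existing server before the migration. A sketch using the standard DMVs (nothing here is specific to this environment):

```sql
-- Per-file average read/write latency from cumulative stats since last restart.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```

If latencies are flat across volumes that all land on the same all-flash array, the residual arguments for splitting are mostly manageability and limiting the blast radius of a full volume rather than throughput.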

mysql – Fixing crashed MariaDB databases on a cPanel server

I have a small cPanel server which I use for my clients’ projects and my own personal projects. It is also shared among a few friends who chip in to keep the disks spinning. Recently I’ve noticed a huge increase in CPU usage from mysql, and a few of my friends reported that their DBs had crashed and that fixing them with the cPanel repair tool helped.

However, this kept happening, and I’m trying to find a solution. My server has 16GB of RAM and RAID 1 disks. The processor is an old Xeon W3520.

When I restart MySQL using systemctl restart mysql, the following appears in the log.

2020-11-24 19:04:12 0 [Note] InnoDB: Using Linux native AIO
2020-11-24 19:04:12 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-11-24 19:04:12 0 [Note] InnoDB: Uses event mutexes
2020-11-24 19:04:12 0 [Note] InnoDB: Compressed tables use zlib 1.2.7
2020-11-24 19:04:12 0 [Note] InnoDB: Number of pools: 1
2020-11-24 19:04:12 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-11-24 19:04:12 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-11-24 19:04:12 0 [Note] InnoDB: Completed initialization of buffer pool
2020-11-24 19:04:12 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2020-11-24 19:04:13 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2020-11-24 19:04:13 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2020-11-24 19:04:13 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2020-11-24 19:04:13 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2020-11-24 19:04:13 0 [Note] InnoDB: Waiting for purge to start
2020-11-24 19:04:13 0 [Note] InnoDB: 10.3.27 started; log sequence number 43130164885; transaction id 86603248
2020-11-24 19:04:13 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2020-11-24 19:04:13 0 [Note] Plugin 'FEEDBACK' is disabled.
2020-11-24 19:04:13 0 [Note] Server socket created on IP: '::'.
2020-11-24 19:04:13 0 [Note] InnoDB: Buffer pool(s) load completed at 201124 19:04:13
2020-11-24 19:04:13 0 [Note] Reading of all Master_info entries succeeded
2020-11-24 19:04:13 0 [Note] Added new Master_info '' to hash table
2020-11-24 19:04:13 0 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.3.27-MariaDB'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MariaDB Server
2020-11-24 19:05:38 52 [ERROR] Got error 127 when reading table './(database)/(table)'
2020-11-24 19:05:38 52 [ERROR] mysqld: Table '(table)' is marked as crashed and should be repaired
2020-11-24 19:05:38 52 [ERROR] mysqld: Table '(table)' is marked as crashed and should be repaired

The log also lists a few other tables to be repaired, which I have now repaired using the phpMyAdmin interface in WHM. I then restarted MySQL with systemctl restart mysql.

However, the problem still lingers: CPU usage from mysql sits at around 100%.

I also tried repairing with mysql_upgrade -u root --force -p, but it didn’t yield any result. I also ran mysqlcheck --repair --all-databases to try to fix the crashed databases, but that didn’t yield any result either.

Following is my my.cnf file content.

# This group is read both by the client and the server
# use it for options that affect everything

# include all files from the config directory
!includedir /etc/my.cnf.d

performance_schema = off

I’m running the following versions:

MariaDB 10.3.27
cPanel/WHM v90.0.17
Linux 3.10.0-1160.6.1.el7.x86_64 #1 SMP Tue Nov 17 13:59:11 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
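Nothing in the my.cnf above overrides the InnoDB or MyISAM defaults, and the startup log confirms InnoDB is running with its 128M default buffer pool on a 16GB machine. As a hedged starting point only (the values below are assumptions to be tuned against the actual workload, not a fix for the crashes themselves):

```ini
[mysqld]
# Give InnoDB a real share of the 16GB of RAM (default is 128M).
innodb_buffer_pool_size = 4G
# Avoid double-buffering data through the OS page cache.
innodb_flush_method = O_DIRECT
# Tables that log "error 127" / "marked as crashed" are typically MyISAM;
# give the MyISAM index cache some room too.
key_buffer_size = 256M
```

Repeated MyISAM crashes on a box with old hardware and RAID 1 also make it worth checking the disks themselves (SMART status, RAID health) before tuning anything.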

I just found the following errors in the log too.

Thank you.

java – Pattern for syncing databases with undo option

I work on a large and old application consisting of a server and a fat client. Part of what the application does is handle a largish (a few hundred MB) database of frequently changing data (around a dozen rows per second). Because of the size, there is a master copy of the database on the server and a local copy on every client, and they need to be kept in sync.

Changes can be caused either by outside events arriving at the server or by user interactions. In both cases, the master and local copies need to be updated. If the change came from a user interaction, there is a time window of several minutes during which the user can click “undo” to revert the changes their last action caused. That needs to be synced to all the other clients as well. Depending on which business object has changed, additional business logic must sometimes be executed when a change happens.

We use JPA/Hibernate as an ORM layer between the database and our code, both on the client and the server side. But there are different database backends on both sides.

At the moment, our solution is old, half-baked legacy code: Diffs between objects are calculated based on string representations of their attributes. The corresponding old/new pairs of string-values get distributed for syncing and stored in a separate table for later undos. Lots of things can and sometimes do go wrong the way it is now.

What is the preferred way of doing this? I’ve looked through some Hibernate docs and tutorials, but it seems there is no ready-made solution in JPA that does this out of the box. I could probably design something that’s less half-baked, maybe three-quarters-baked, with @Audit and entity listeners. But I’m assuming that some smart people have already come up with a design pattern that realises this. Can someone please point me in the right direction?
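For orientation, the usual shape of such a design is a typed change log (essentially the command pattern backed by a small event log): each user action records the old and new values of the fields it touched, the recorded action is what gets distributed to the other clients, and undo replays the old values. A minimal sketch, independent of JPA, with all names invented for illustration:

```java
import java.util.*;

/** A recorded field-level change that can be applied or reverted. */
record Change(String entityId, String field, Object oldValue, Object newValue) {}

/** One undoable user action; it may touch several fields and objects. */
class Action {
    final List<Change> changes = new ArrayList<>();
    final long timestamp = System.currentTimeMillis(); // for the undo window
}

/** Minimal change log: applies actions to a map-backed "database", undoes by action. */
class ChangeLog {
    private final Deque<Action> history = new ArrayDeque<>();

    void apply(Action action, Map<String, Map<String, Object>> db) {
        for (Change c : action.changes) {
            db.computeIfAbsent(c.entityId(), k -> new HashMap<>())
              .put(c.field(), c.newValue());
        }
        history.push(action);
    }

    /** Revert the most recent action, if any, by writing back the old values. */
    boolean undo(Map<String, Map<String, Object>> db) {
        Action last = history.poll();
        if (last == null) return false;
        for (Change c : last.changes) {
            db.get(c.entityId()).put(c.field(), c.oldValue());
        }
        return true;
    }
}
```

In a real system the actions would be persisted and broadcast to clients rather than held in memory, and the several-minute undo window becomes a timestamp check before undo. Hibernate Envers (the @Audited annotation) can generate the per-entity revision history for you, but the distribution and undo layer on top is still yours to design; the key improvement over the current approach is that changes are recorded as typed old/new values instead of diffed string representations.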

mariadb – I deleted a folder from /var/lib/mysql and now all of my databases seem to be unaccessible

I was receiving this error when trying to drop a database:

ERROR 1010 (HY000): Error dropping database (can't rmdir './redpopdigital@002ecom', errno: 39 "Directory not empty")

So I went into /var/lib/mysql and just did an rm -rf on that directory. I did not know that this would break literally every other database.

Now it seems all of my databases are inaccessible.

I tried this as a troubleshooting step:

ubuntu@blainelafreniere:~$ mysqlcheck --repair blainelafreniere -u root -p
Enter password: 
Error    : Table 'blainelafreniere.wp_commentmeta' doesn't exist in engine
status   : Operation failed
Error    : Table 'blainelafreniere.wp_comments' doesn't exist in engine
status   : Operation failed
Error    : Table 'blainelafreniere.wp_links' doesn't exist in engine
status   : Operation failed

What’s weird is, all of the data still appears to be sitting in /var/lib/mysql… but I can’t access it for some reason.

Is there any hope of recovering the data from /var/lib/mysql or am I completely screwed?
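“Doesn’t exist in engine” usually means the InnoDB data dictionary (kept in the shared ibdata1 file) no longer agrees with the .frm/.ibd files sitting in /var/lib/mysql, so the files being present is consistent with the tables being unreachable. Absent a backup, one commonly used route is a per-table tablespace import; the snippet below is a sketch with placeholder content, and it only works if each table can be recreated with its exact original definition (for WordPress tables the schema is known):

```sql
-- Per table, with wp_comments as the example:
-- 1. Recreate the table from its known definition (columns elided here).
CREATE TABLE wp_comments ( /* original column definitions */ ) ENGINE=InnoDB;
-- 2. Throw away the fresh, empty tablespace.
ALTER TABLE wp_comments DISCARD TABLESPACE;
-- 3. Outside SQL: copy the orphaned wp_comments.ibd back into the database
--    directory and fix ownership (chown mysql:mysql wp_comments.ibd).
-- 4. Adopt the old tablespace file.
ALTER TABLE wp_comments IMPORT TABLESPACE;
```

Take a file-level copy of /var/lib/mysql before trying anything; a botched import can make things worse, and a copy keeps other recovery options open.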


Storing data from external databases into one

I need to store different datasets coming from a provider, where each dataset has its own release path.
These datasets can be combined to get the full picture of the data available from the provider.

I know from the publisher’s documentation which combinations of versions are allowed.
My pain point is that I need to keep track of the version of each dataset.

Example of data:

The “Main” dataset from publisher “ABC” has version “1.0”.
The “Ext” dataset from publisher “ABC” has version “release_3”.

My schema is as follows:

Schema diagram
Based on that, is it a problem that the FK “VersionId” in tables “Main” and “Ext” references different records in table “Version”?

I’m afraid that a user querying the DB will not expect the versions to diverge (as the FK name is identical in both tables).
Unfortunately that’s the reality of the data provided by the publisher.

Is there a better design to accomplish the same result?

NB: It is possible that in the future, datasets from different providers need to be combined.
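One way to make the divergence explicit instead of surprising is to give each FK a dataset-specific name and to store the publisher’s allowed combinations as data rather than convention. A sketch with invented table and column names:

```sql
-- One Version row per (dataset, release label).
CREATE TABLE Version (
    VersionId   INT PRIMARY KEY,
    Dataset     VARCHAR(20) NOT NULL,   -- 'Main' or 'Ext'
    Label       VARCHAR(50) NOT NULL,   -- '1.0', 'release_3', ...
    UNIQUE (Dataset, Label)
);

CREATE TABLE Main (
    MainId        INT PRIMARY KEY,
    MainVersionId INT NOT NULL REFERENCES Version(VersionId)
);

CREATE TABLE Ext (
    ExtId        INT PRIMARY KEY,
    ExtVersionId INT NOT NULL REFERENCES Version(VersionId)
);

-- The publisher's allowed combinations, kept as rows.
CREATE TABLE VersionCompatibility (
    MainVersionId INT NOT NULL REFERENCES Version(VersionId),
    ExtVersionId  INT NOT NULL REFERENCES Version(VersionId),
    PRIMARY KEY (MainVersionId, ExtVersionId)
);
```

With VersionCompatibility populated from the publisher’s documentation, a querying user can join through it to see which Main/Ext version pairs belong together; adding a Publisher column to Version would cover the future multi-provider case without a schema change.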


Deleted all MySQL databases in phpMyAdmin

I’m on Android (Samsung A10e), and the KSWEB app gives me the option to "reinstall all components". Should I use this? What does it do?

recovery – Databases missing after exporting server and rebuilding master DB

We had a server running Microsoft SQL Server 2014 databases on Hyper-V. I exported it (with the C: and D: drives) and opened it on another machine.

I could not start the MSSQLSERVER service because it said the master db needed to be rebuilt.

I’ve rebuilt the master DB, but now the server looks empty: I cannot see any databases except master, model, msdb, and tempdb.

I have all of the databases (.mdf and .ldf files) in the folder D:\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA. How do I restore them without losing functionality?
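Rebuilding master wipes the server’s catalog of user databases, but the .mdf/.ldf files themselves are untouched and can be reattached. A sketch, to be repeated per database; the database and file names here are placeholders:

```sql
CREATE DATABASE MyDatabase
ON (FILENAME = 'D:\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\MyDatabase.mdf'),
   (FILENAME = 'D:\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\MyDatabase_log.ldf')
FOR ATTACH;
```

Note that logins (stored in master) and SQL Agent jobs (stored in msdb) were also lost with the rebuild and have to be recreated separately; after attaching, orphaned database users may need remapping to their logins.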

databases – Why is it a big deal here on this website and on the internet about the “ISP spying on people’s browsing activities”?

Because in the end, ISPs are going to delete all user data in line with their own data-retention policies. If everything is eventually going to be deleted, what’s the danger to anyone? Why is it a big deal? I don’t understand.