c# – Logging to database in ASP.NET Core Entity Framework Core application

I have an ASP.NET Core + Entity Framework Core application and I want to implement my own custom logger. I don’t want to use ready-made solutions like NLog or Serilog.

I added a new class library project (“Logger”) to my solution, with classes that do the logging. Mostly they call the service provider to get the db context, add a log object to it, and then save changes.

The problem is that my db context lives in another class library project (“Data”), so “Logger” references “Data” in order to use the db context class. However, to be able to save logs in the database, the db context also needs a DbSet of type “Log”, which would mean “Data” referencing “Logger”: a circular dependency.

How do I remedy this situation?

mobile application – Should a navigation be specific to a page?

Let’s say a mobile application has 2 interfaces: one that lists items, and the other that displays the details of one of these items. Both of these interfaces have the same navigation bar on top of the screen.

Should the 2 interfaces be 2 pages, or 2 tabs?

  • 2 pages means that when I click on an item in the list, a new page opens on top of the current one.

  • 2 tabs means that there is 1 page, and when I click on an item, the content of the page changes, except for the navigation bar, which stays.

It feels weird to me that 2 pages would have the same navigation bar. If the navigation bar is the same, it probably means that these pages should not be pages, but tabs in a single page.

I checked on Facebook, Twitch and Slack, and I never saw a page having the same navigation bar as its parent.

So, if both interfaces have the same navigation, should these interfaces necessarily be tabs and not pages, or is it fine that they are pages?

web application – Is running bash script that is taking arguments from site dialog box a good idea?

I’m building a site that will use the YouTube API to keep track of playlist changes.
In order for third parties to use it, I would supply a dialog box in which the user types his/her playlist ID. This value would be read and passed as an argument to a bash script, which in turn runs curl/python scripts to connect to the API (all running on my machine), and another bash script that creates directories (mkdir) on my disk.

Does this potentially endanger me or my files somehow?
Could someone input some magic command that would do “rm * -f” or a similar malicious endeavor?
Should I use some external server instead of my machine?

I know nothing about security; I’ve read a few topics here but didn’t find a similar problem.

I want to create an application for a Linux system, and I want to know how to modify open source applications like VestaCP or Webmin

I am a complete beginner in Linux development. Please explain what I need to learn in order to build a Linux application, or to develop or modify an open source application.

c++ – How to unit test elegantly in my application

I have an application that deals with graph computation. I want to cover it with unit tests, but I found it hard to test.

The main classes are as follows:

  • Grid stores the graph structure.

  • GridInput parses the input file and saves it into a Grid.

  • GridOperatorA performs some operation on the Grid.

  • GridOperatorB performs some operation on the Grid.

The production code looks something like this:

string configure_file = "data.txt";
GridInput input(configure_file);
Grid grid = input.parseGrid();
GridOperatorA a;
GridOperatorB b;

I find this code hard to test.

My unit test code is shown below:

// unit test for grid input
string configure_file = "data.txt";
GridInput input(configure_file);
Grid grid = input.parseGrid();
// check grid status from input file
assert(grid.someAttribute(1) == {1,2,3,4,...,100}); // long int array, hard to understand
assert(grid.someAttribute(5) == {100,101,102,...,200}); // long int array, hard to understand

// unit test for operator A
string configure_file = "data.txt";
GridInput input(configure_file);
Grid grid = input.parseGrid();
GridOperatorA a;
a.apply(grid); // apply the operator (method name illustrative)
// check grid status after operator A
assert(grid.someAttribute(1) == {1,3,7,4,...,46}); // long int array, hard to understand
assert(grid.someAttribute(5) == {59,78,...,32}); // long int array, hard to understand

// unit test for operator B
string configure_file = "data.txt";
GridInput input(configure_file);
Grid grid = input.parseGrid();
GridOperatorA a;
GridOperatorB b;
a.apply(grid);
b.apply(grid); // apply the operator (method name illustrative)
// check grid status after operator B
assert(grid.someAttribute(1) == {3,2,7,9,...,23}); // long int array, hard to understand
assert(grid.someAttribute(5) == {38,76,...,13}); // long int array, hard to understand

In my opinion, my unit tests are not good; they have many weaknesses:

  • The unit tests are slow: in order to test OperatorA and OperatorB, they need to do file IO.

  • The unit tests are not clear: they check the grid status after each operator, but checking lots of arrays makes it hard for a programmer to understand what each array stands for. A few days later, the programmer cannot tell what happened.

  • The unit tests cover only one configure file; if I need to test grids built from many configure files, there will be even more hard-to-understand arrays.

I have read about techniques to break dependencies, such as mock objects. I can mock the grid read from the configure file, but the mock data looks just like the data stored in the configure file. I can mock the Grid after OperatorA, but the mock data looks just like the grid status after OperatorA. Both approaches still lead to a lot of hard-to-understand arrays.

I do not know how to unit test elegantly in my situation. Any advice is appreciated. Thanks for your time.

Application update is rejected by Google Play Store

My app update got rejected from being published on the Play Store, without any further information or explanation:

Your app has been rejected and wasn’t published due to a policy
violation. If you submitted an update, the previous version of your
app is still available on Google Play.

FYI: I published a different application with the same UI/UX but different content in January 2020.

application design – Should I store data from third parties?

should I also store all the data

It is not your duty. That does not mean you can’t, nor that there is no benefit to it. Storing the data, at least the data you access, would be a cache. And a cache is useful if reading the state is more common than updating it.

Will it be over-redundant or is storing the data by myself a protection against something?

A common trade-off is memory in exchange for performance. That is what you would be doing here: less latency when getting the state, because you don’t actually go to the third-party API, but read the copy of the state stored in memory.

It is protecting you from high latency and intermittency in the I/O connection, if there is any.

Of course, you should profile. If, with your cache, things happen to be slower… well, you know, don’t do that.

What points should be taken into consideration?

Who owns the data? Can the data be mutated elsewhere unexpectedly? Does the API support the concept of transactions? Are you using a library that already does caching?

If the state behind the third-party API changes, what you have stored is no longer correct. You are facing one of the hardest problems: when to invalidate the cache.

If your application follows a simple event-driven design, you can copy the portion of the state relevant to handling the current event (and you will naturally do that, as the response from the API is likely to be a copy anyway), and keep that copy for as long as you handle the current event, pretending it does not change during that period. If the data can be modified unexpectedly elsewhere, that is the extent to which you should keep that information in your system. You should always assume that the data has changed since you read it (and transactions, if available, would be useful for making coherent updates).

However, if you know your application is the only one modifying the data (despite it being behind a third-party API), then you are free to use more complex caching schemes, unless you are using a library that already does caching for you.

Note that “third party API” does not imply remote. In fact, any library you import into your project that was not written by you has a third-party API.

web development – IIS – Limit usage of each web application

I have two web applications hosted locally on my IIS. One is responsible for the core functionality of my website, and the other for executing background tasks (database tasks, API calls, etc., using Hangfire). It seems that when a heavy background task is being executed, my main application slows down when executing SQL queries, Ajax requests, or any other API calls.

How can I set the usage limit of each application?

For example

Main application: 70%

Background Tasks: 30%

Also, I’m planning to host my applications on Azure soon, so is there anything else I should look out for to prevent this from happening in the production environment?
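For the IIS side specifically, application pools can be given a CPU cap. A sketch of an applicationHost.config fragment, assuming the two applications run in separate pools (pool names are illustrative); note that `limit` is expressed in 1/1000ths of a percent, and that this caps each pool individually rather than enforcing a relative 70/30 split:

```xml
<!-- applicationHost.config fragment; pool names are illustrative -->
<system.applicationHost>
  <applicationPools>
    <add name="MainAppPool">
      <!-- 70% cap; ThrottleUnderLoad (IIS 8.5+) only throttles under contention -->
      <cpu limit="70000" action="ThrottleUnderLoad" />
    </add>
    <add name="BackgroundTasksPool">
      <!-- 30% cap for the Hangfire background work -->
      <cpu limit="30000" action="Throttle" />
    </add>
  </applicationPools>
</system.applicationHost>
```

The same settings are reachable in IIS Manager under the application pool's advanced settings (CPU section).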


Best way to schedule an application on raspbian

I’m running a raspberry pi, and I would like to run a specific application (qbittorrent-nox) between 00:00 and 12:00.

Currently the service is scheduled using systemd, but I don’t know what the best way to schedule it is.
I am looking for a solution that does not depend on the time at which I start the Raspberry Pi.

If you need more information, please let me know.

Thanks for the help in advance!
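Since the service is already under systemd, one option is a pair of timers with Persistent=true, so a missed activation fires at the next boot rather than depending on when the Pi was powered on. The unit/file names below are illustrative and assume a qbittorrent-nox.service already exists; edge cases (e.g. booting between 12:00 and 00:00 after both triggers were missed) may need extra conditions, so treat this as a sketch:

```ini
# /etc/systemd/system/qbittorrent-start.timer
[Timer]
OnCalendar=*-*-* 00:00:00
# Fire a missed activation at the next boot:
Persistent=true
Unit=qbittorrent-nox.service

[Install]
WantedBy=timers.target

# /etc/systemd/system/qbittorrent-stop.service (oneshot that stops the app)
[Service]
Type=oneshot
ExecStart=/bin/systemctl stop qbittorrent-nox.service

# /etc/systemd/system/qbittorrent-stop.timer
[Timer]
OnCalendar=*-*-* 12:00:00
Persistent=true
Unit=qbittorrent-stop.service

[Install]
WantedBy=timers.target
```

Enable both timers (systemctl enable --now qbittorrent-start.timer qbittorrent-stop.timer) and disable the service's own boot-time start, so systemd only launches it inside the window.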

c# – How to organize database access logic for the infrastructure and application layer when avoiding ORM tools?

ORMs like Entity Framework do not preclude you from using Raw SQL queries.

If Entity Framework is a bit too much for your liking, consider using Dapper or any of a number of different micro-frameworks. These tools remove a lot of the hassle of making a connection to the database and managing query parameters without encountering Little Bobby Tables, allowing you to focus your time and effort on writing the SQL queries.

ORMs also serve as your Data Access Layer, abstracting CRUD operations away from the more interesting Service Layer of your application. If you still feel like writing your own Data Access Layer, you can still do so. You will need one class and four methods for each table in your database (Create, Read, Update & Delete).

The way you manage your connection can vary. Depending on how quickly MariaDB can open connections and whether or not it caches connections, you might want to just open a connection for each query. You could also do it at the Aggregate level if you’re practicing DDD. Or, you could simply open a connection and leave it open. It really all depends on your application and what you want to do with it.

One file per SQL statement sounds like a bit much. Try putting your SQL statements inside your entity classes; it’s a very convenient place to stash them.

Folder structures are largely a matter of taste. There’s no “standard,” and everyone does them differently.

Further Reading
P of EAA : Data Mapper
P of EAA : Repository
P of EAA : Service Layer