postgresql – Storing sparse and volatile data in an analytical database

I want to perform analytics on my company's deliverables: financial metrics, production volumes, etc.
For that, I'm creating a PostgreSQL database.

The main problem is that I'm not sure which parameters (metrics) I will ultimately need – I would add parameters on demand. Therefore, designing a fixed ER model seems difficult for me now – I don't know how many columns there would be or which ones, or whether I should use separate tables for different groups of information.

I’ve come up with the following EAV model:

Parameters
| ID | title        |
|----|--------------|
| 1  | Revenue      |
| 2  | Expenditures |

ITEMS
| ID | parameter_id | title           |
|----|--------------|-----------------|
| 1  | 1            | online shopping |
| 2  | 2            | new computers   |

DATA
| ID | item_id | date       | value |
|----|---------|------------|-------|
| 1  | 1       | 2020-01-01 | 1000  |
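In Postgres terms, the three tables above might be created like this (a sketch; the column types and the index are my assumptions, not part of the original design):

```sql
CREATE TABLE parameters (
    id    serial PRIMARY KEY,
    title text NOT NULL
);

CREATE TABLE items (
    id           serial PRIMARY KEY,
    parameter_id integer NOT NULL REFERENCES parameters (id),
    title        text NOT NULL
);

CREATE TABLE data (
    id      bigserial PRIMARY KEY,
    item_id integer NOT NULL REFERENCES items (id),
    date    date NOT NULL,
    value   numeric NOT NULL
);

-- Analytical queries typically filter by item and date range,
-- so a composite index is a reasonable starting point:
CREATE INDEX data_item_date_idx ON data (item_id, date);
```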

From my research, this model is said to be difficult for analytics – different aggregation functions per metric, values changing over time intervals, etc.
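To illustrate the friction: with EAV, a metric is a row rather than a column, so even a simple aggregate needs two joins and a filter on a text title (table and column names follow the tables above):

```sql
-- Total revenue for 2020: every aggregate must join
-- through the whole EAV hierarchy to identify the metric.
SELECT sum(d.value) AS revenue_2020
FROM data d
JOIN items i       ON i.id = d.item_id
JOIN parameters p  ON p.id = i.parameter_id
WHERE p.title = 'Revenue'
  AND d.date >= DATE '2020-01-01'
  AND d.date <  DATE '2021-01-01';
```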

QUESTION

  1. Is this model acceptable in my case?
  2. What performance issues might I face, and how can I overcome them?
  3. Should I move to an ER model later (a separate table for each parameter, with a separate column for each item)? Or is it also appropriate to use views that pivot the data?
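For reference, the "views with pivoting" idea in question 3 could be sketched with plain conditional aggregation (no `tablefunc`/`crosstab` extension required); the view and column names here are illustrative assumptions:

```sql
-- Pivot the EAV rows into one column per parameter.
-- Note: each newly added parameter requires editing
-- the view definition by hand.
CREATE VIEW monthly_metrics AS
SELECT
    date_trunc('month', d.date) AS month,
    sum(d.value) FILTER (WHERE p.title = 'Revenue')      AS revenue,
    sum(d.value) FILTER (WHERE p.title = 'Expenditures') AS expenditures
FROM data d
JOIN items i       ON i.id = d.item_id
JOIN parameters p  ON p.id = i.parameter_id
GROUP BY 1;
```

One trade-off worth noting: a view keeps the flexible EAV storage while giving analysts a wide, column-per-metric shape, but the pivot happens at query time, so it does not remove the join cost, only hides it.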