mysql – SQL – count rows with same value

I would like to list all the rows that share the same value (gruppo_muscolare.nome) and count how many values (esercizio.nome) are related to that value, but if I use GROUP BY I only get a single row per group instead of the full list.

SELECT
gruppo_muscolare.nome,
esercizio.nome,
COUNT(gruppo_muscolare.nome) AS counter
FROM tabella_allenamento, scheda, esercizio_scheda, esercizio, gruppo_muscolare
WHERE tabella_allenamento.id = 29 AND scheda.id_tabella=tabella_allenamento.id
AND esercizio_scheda.id_scheda=scheda.id AND esercizio.id=esercizio_scheda.id_esercizio
AND gruppo_muscolare.id=esercizio.id_gruppo_muscolare
GROUP BY gruppo_muscolare.nome, esercizio.nome, 
gruppo_muscolare.id 
ORDER BY counter DESC

I get:

nome                  nome                 counter 
pettorali          Chest Press               1
pettorali      incline press hammer          1
quadricipiti       Leg Curl                  1

while I would like to get:

nome                  nome                 counter 
pettorali          Chest Press               2
pettorali      incline press hammer          2
quadricipiti       Leg Curl                  1

If I use the GROUP BY statement with only a value:

SELECT gruppo_muscolare.nome,
esercizio.nome, 
COUNT(gruppo_muscolare.nome) AS counter 
FROM tabella_allenamento, scheda, esercizio_scheda, esercizio, gruppo_muscolare
WHERE tabella_allenamento.id = 29 AND scheda.id_tabella=tabella_allenamento.id
AND esercizio_scheda.id_scheda=scheda.id AND esercizio.id=esercizio_scheda.id_esercizio
AND gruppo_muscolare.id=esercizio.id_gruppo_muscolare
GROUP BY gruppo_muscolare.nome
ORDER BY counter DESC

then I get:

nome                  nome                 counter 
pettorali          Chest Press               2
quadricipiti       Leg Curl                  1

which is what I want, but one result is missing.

How can I list ALL the results and at the same time also get the correct counter that counts how many esercizio.nome there are for each gruppo_muscolare.nome?

Thank you!
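
A minimal sketch of one way to do this, assuming MySQL 8.0+ so window functions are available (on older versions, joining to a derived table of per-group counts gives the same result); the join conditions are simply the ones from the query above rewritten as explicit JOINs:

SELECT
    gruppo_muscolare.nome,
    esercizio.nome,
    -- count every exercise row that belongs to the same muscle group
    COUNT(*) OVER (PARTITION BY gruppo_muscolare.nome) AS counter
FROM tabella_allenamento
JOIN scheda           ON scheda.id_tabella = tabella_allenamento.id
JOIN esercizio_scheda ON esercizio_scheda.id_scheda = scheda.id
JOIN esercizio        ON esercizio.id = esercizio_scheda.id_esercizio
JOIN gruppo_muscolare ON gruppo_muscolare.id = esercizio.id_gruppo_muscolare
WHERE tabella_allenamento.id = 29
ORDER BY counter DESC;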

mysql – Why cannot use buff/cache?

It seems mysqld allocates too much memory as buff/cache:

free -m
              total        used        free      shared  buff/cache available
Mem:            990         448          96          36         445         326
Swap:           511         511           0

Now I cannot start httpd service because it “Failed to fork: Cannot allocate memory”.

I wonder why this happens. Why can't the 445M of buff/cache be reclaimed and used for httpd?
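
For reference, a quick sketch of how to see how much memory mysqld itself is configured to hold (purely diagnostic, and it assumes InnoDB is the storage engine in use); memory that mysqld allocates shows up under "used" in free, whereas buff/cache is the kernel page cache:

-- how large the buffer pool is allowed to grow, and how much data it holds now
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_bytes_data';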

information schema – How to use a query to get the definition of MySQL database objects (specifically the variables they use)

I have written a query to get most of the useful information about the routines in a MySQL database for a database documenter that I am developing.

The code below will return routine name, routine type, routine param list, routine return value (if any), routine definition and routine comment (if any).

However, I can't work out how to return the variables used by each routine (either local or session variables).

Can anyone help me out with the code to get that extra column, containing the variable names, displayed in my query?
(It would be good if they were concatenated as I have done with the parameters, but that's not too important.)

I should stress that the databases are held on a remote shared server, so I cannot log in as root, nor can I access the server as localhost. For example, access to tables like 'proc' and 'users' and other tables that would be available to an admin user is denied.

Here is the code I have so far that returns most of the information (included in case it's useful to anyone else):

SELECT routine_name, routine_type, routine_param_list, routine_returns, routine_body, routine_comment
FROM (
  SELECT
    `information_schema`.`events`.`EVENT_NAME`       AS `routine_name`,
    'EVENT'                                           AS `routine_type`,
    ''                                                 AS `routine_param_list`,
    ''                                                 AS `routine_returns`,
    `information_schema`.`events`.`EVENT_DEFINITION`  AS `routine_body`,
    `information_schema`.`events`.`EVENT_COMMENT`     AS `routine_comment`
  FROM `information_schema`.`events`

  UNION ALL

  SELECT
    `information_schema`.`routines`.`ROUTINE_NAME` AS `routine_name`,
    `information_schema`.`routines`.`ROUTINE_TYPE` AS `routine_type`,
    (SELECT
       GROUP_CONCAT(
         CONCAT(
           `information_schema`.`parameters`.`PARAMETER_MODE`, ' ',
           `information_schema`.`parameters`.`PARAMETER_NAME`, ' ',
           `information_schema`.`parameters`.`DTD_IDENTIFIER`
         ) SEPARATOR '; '
       )
     FROM `information_schema`.`parameters`
     WHERE `information_schema`.`parameters`.`SPECIFIC_NAME` = `information_schema`.`routines`.`ROUTINE_NAME`
     GROUP BY `information_schema`.`parameters`.`SPECIFIC_NAME`
    ) AS `routine_param_list`,
    `information_schema`.`routines`.`DTD_IDENTIFIER`     AS `routine_returns`,
    `information_schema`.`routines`.`ROUTINE_DEFINITION` AS `routine_body`,
    `information_schema`.`routines`.`ROUTINE_COMMENT`    AS `routine_comment`
  FROM `information_schema`.`routines`

  UNION ALL

  SELECT
    `information_schema`.`tables`.`TABLE_NAME` AS `routine_name`,
    `information_schema`.`tables`.`TABLE_TYPE` AS `routine_type`,
    NULL AS `routine_param_list`,
    ''   AS `routine_returns`,
    CONCAT(
      'Algorithm: ',
      `information_schema`.`VIEWS`.`ALGORITHM`,
      CHAR(13), CHAR(10), CHAR(13), CHAR(10),
      `information_schema`.`VIEWS`.`VIEW_DEFINITION`
    ) AS `routine_body`,
    '' AS `routine_comment`
  FROM `information_schema`.`tables`
    LEFT JOIN `information_schema`.`VIEWS`
      ON `information_schema`.`VIEWS`.`TABLE_NAME` = `information_schema`.`tables`.`TABLE_NAME`
  WHERE `information_schema`.`tables`.`TABLE_TYPE` = 'VIEW'
) AS T
-- WHERE routine_name = 'MyRoutine'
--   AND routine_type = 'MyRoutineType'
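
As far as I know, information_schema has no table that lists routine-local or session variables, so the names would have to be parsed out of ROUTINE_DEFINITION. As a rough starting point, a sketch that only detects whether a routine uses them (not the actual names):

SELECT ROUTINE_NAME,
       ROUTINE_DEFINITION LIKE '%DECLARE %' AS uses_local_variables,
       ROUTINE_DEFINITION LIKE '%@%'        AS uses_session_variables
FROM `information_schema`.`routines`
WHERE ROUTINE_SCHEMA = DATABASE();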

Error in a C# MySQL .NET 5 project

Why does my C# Windows Forms MySQL .NET 5 project, when compiled and deployed to another machine (computer), show errors that it does not show on the machine where the project lives? It shows input errors, database errors…

mysql – Can we use proxysql 2.x for load balancing multiple Galera clusters?

We recently introduced ProxySQL 2.x for load balancing a Galera cluster in a production environment. It is working great so far.

I have been tasked with separating the staging cluster, which currently runs in the same production instance.

Can I use the same ProxySQL instance to load balance the staging cluster as well, to avoid redundancy?

At the same time I am a bit hesitant, considering that ProxySQL 2.x does not really know anything about the cluster.

Have any of you used the same ProxySQL 2.x instance for multiple clusters? Are there any risks?
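
For what it's worth, ProxySQL 2.x keeps clusters apart by hostgroup, so one instance can monitor more than one Galera cluster as long as each cluster gets its own set of hostgroups. A minimal sketch against the admin interface (the hostnames and hostgroup numbers here are made up):

-- one mysql_galera_hostgroups row per cluster, each with its own hostgroups
INSERT INTO mysql_galera_hostgroups
  (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup,
   offline_hostgroup, active, max_writers, writer_is_also_reader, max_transactions_behind)
VALUES
  (10, 12, 11, 13, 1, 1, 1, 100),   -- production cluster
  (30, 32, 31, 33, 1, 1, 1, 100);   -- staging cluster

INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES
  (10, 'prod-node1',  3306), (10, 'prod-node2',  3306),
  (30, 'stage-node1', 3306), (30, 'stage-node2', 3306);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;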

mysql – what is the difference between first and second execution of a query?

I notice that when I execute queries for the first time (i.e., just after mysqld_safe --user=mysql &), they are slower than when they are executed a second time. When the query (source) is:

select 
    s_name, s_address 
from 
    SUPPLIER, NATION 
where 
    s_suppkey in ( 
        select 
            ps_suppkey 
        from 
            PARTSUPP 
        where 
            ps_partkey in (
                select 
                    p_partkey 
                from 
                    PART 
                where 
                    p_name 
                like 
                    'green%'
            ) and 
            ps_availqty > (
                select 
                    0.5 * sum(l_quantity) 
                from 
                    LINEITEM 
                where 
                    l_partkey = ps_partkey and 
                    l_suppkey = ps_suppkey and 
                    l_shipdate >= date '1993-01-01' and 
                    l_shipdate < date '1993-01-01' + interval '1' year
            )
    ) and 
    s_nationkey = n_nationkey and 
    n_name = 'ALGERIA' 
order by 
    s_name;

And I got:

> SELECT .....;
>   41.255s # <-------- first time
> SELECT .....;
>   6.242s # <-------- second time
> SELECT .....;
>   6.104s # <-------- third time

However, at other times the difference does not exist. For example, with this query (source):

SELECT 
    s_name, count(*) as numwait 
FROM 
    SUPPLIER, LINEITEM l1, ORDERS, NATION 
WHERE 
    s_suppkey = l1.l_suppkey and 
    o_orderkey = l1.l_orderkey and 
    o_orderstatus = 'F' and 
    l1.l_receiptdate > l1.l_commitdate and 
    exists ( 
        SELECT * 
        FROM LINEITEM l2 
        WHERE 
            l2.l_orderkey = l1.l_orderkey and 
            l2.l_suppkey <> l1.l_suppkey
    ) and 
    not exists (
        SELECT * 
        FROM LINEITEM l3 
        WHERE 
            l3.l_orderkey = l1.l_orderkey and 
            l3.l_suppkey <> l1.l_suppkey and 
            l3.l_receiptdate > l3.l_commitdate
    ) and 
    s_nationkey = n_nationkey and 
    n_name = 'EGYPT' 
GROUP BY 
    s_name
ORDER BY 
    numwait desc, s_name 
LIMIT 100;

I got:

> SELECT .....;
>   3m9.264s # <-------- first time
> SELECT .....;
>   3m9.377s # <-------- second time
> SELECT .....;
>   3m7.287s # <-------- third time

MySQL version is 8.0.22
All configurations are default.

Why is this the case?

Thanks.
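
A sketch of one way to check whether the gap is just a cold versus warm InnoDB buffer pool (an assumption on my part, not something the question confirms): compare these counters before and after each run.

-- Innodb_buffer_pool_reads         = logical reads that had to go to disk
-- Innodb_buffer_pool_read_requests = logical reads in total
-- If the first run drives Innodb_buffer_pool_reads up and later runs barely
-- move it, the data was simply cached in the buffer pool after the first run.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';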

mysql – Which film has the most copies in inventory in each store?

I have 2 stores that sell several films, most of which are the same titles. I need an SQL query that shows me the name of the film with the highest number of copies in each store, together with the store and the exact count. I have tried this:

select film.title as Pelicula, count(*) as Cantidad
from film, store
where  store.store_id = film.film_id 

but it only returns the name and the count, and I also need it to show the store id, for every store.
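
A sketch of one way to get this, assuming the Sakila sample schema (where copies live in an inventory table that references both film_id and store_id) and MySQL 8.0+ for window functions:

-- count copies per store and film, then keep the top film for each store
SELECT store_id, title, copies
FROM (
    SELECT i.store_id,
           f.title,
           COUNT(*) AS copies,
           RANK() OVER (PARTITION BY i.store_id ORDER BY COUNT(*) DESC) AS rnk
    FROM inventory i
    JOIN film f ON f.film_id = i.film_id
    GROUP BY i.store_id, f.title
) ranked
WHERE rnk = 1;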

mysql – What is the name of the process of creating a new DB from an exported snapshot?

I need to set up an automated process that:

  1. creates a snapshot of an AWS RDS (MySQL) DB
  2. uses that snapshot to create a new RDS instance that will be used as an “analytics playground”

I'm looking for the proper name for the process described in step 2 above. So I ask: what would the DBA community call the process of using a DB snapshot (export) to create a brand new database? Materializing? Restoring? Backing up? Something else?

linux – Importing big MySQL Database

I am trying to import a 6GB database on RHEL7 and have been reading around for a solution.
Here are my main settings in my.cnf:

debug-info = TRUE
max_allowed_packet=8200M;
net_buffer_length=1000000M;
post_max_size=4096M
max_exection_time = 60 * 60;
upload_max_filesize=6000M
read_buffer_size = 2014K
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
memory_limit=6000M

After changing my settings I restarted the mysql service and then

mysql> SHOW VARIABLES LIKE '%max%';
+------------------------------------------------------+----------------------+
| Variable_name                                        | Value                |
+------------------------------------------------------+----------------------+
| binlog_max_flush_queue_time                          | 0                    |
| ft_max_word_len                                      | 84                   |
| group_concat_max_len                                 | 1024                 |
| innodb_adaptive_max_sleep_delay                      | 150000               |
| innodb_change_buffer_max_size                        | 25                   |
| innodb_compression_pad_pct_max                       | 50                   |
| innodb_file_format_max                               | Barracuda            |
| innodb_ft_max_token_size                             | 84                   |
| innodb_io_capacity_max                               | 2000                 |
| innodb_max_dirty_pages_pct                           | 75.000000            |
| innodb_max_dirty_pages_pct_lwm                       | 0.000000             |
| innodb_max_purge_lag                                 | 0                    |
| innodb_max_purge_lag_delay                           | 0                    |
| innodb_max_undo_log_size                             | 1073741824           |
| innodb_online_alter_log_max_size                     | 134217728            |
| max_allowed_packet                                   | 4194304              |
| max_binlog_cache_size                                | 18446744073709547520 |
| max_binlog_size                                      | 1073741824           |
| max_binlog_stmt_cache_size                           | 18446744073709547520 |
| max_connect_errors                                   | 100                  |
| max_connections                                      | 151                  |
| max_delayed_threads                                  | 20                   |
| max_digest_length                                    | 1024                 |
| max_error_count                                      | 64                   |
| max_execution_time                                   | 0                    |
| max_heap_table_size                                  | 16777216             |
| max_insert_delayed_threads                           | 20                   |
| max_join_size                                        | 18446744073709551615 |
| max_length_for_sort_data                             | 1024                 |
| max_points_in_geometry                               | 65536                |
| max_prepared_stmt_count                              | 16382                |
| max_relay_log_size                                   | 0                    |
| max_seeks_for_key                                    | 18446744073709551615 |
| max_sort_length                                      | 1024                 |
| max_sp_recursion_depth                               | 0                    |
| max_tmp_tables                                       | 32                   |
| max_user_connections                                 | 0                    |
| max_write_lock_count                                 | 18446744073709551615 |
| myisam_max_sort_file_size                            | 9223372036853727232  |
| optimizer_trace_max_mem_size                         | 16384                |
| parser_max_mem_size                                  | 18446744073709551615 |
| performance_schema_max_cond_classes                  | 80                   |
| performance_schema_max_cond_instances                | -1                   |
| performance_schema_max_digest_length                 | 1024                 |
| performance_schema_max_file_classes                  | 80                   |
| performance_schema_max_file_handles                  | 32768                |
| performance_schema_max_file_instances                | -1                   |
| performance_schema_max_index_stat                    | -1                   |
| performance_schema_max_memory_classes                | 320                  |
| performance_schema_max_metadata_locks                | -1                   |
| performance_schema_max_mutex_classes                 | 210                  |
| performance_schema_max_mutex_instances               | -1                   |
| performance_schema_max_prepared_statements_instances | -1                   |
| performance_schema_max_program_instances             | -1                   |
| performance_schema_max_rwlock_classes                | 40                   |
| performance_schema_max_rwlock_instances              | -1                   |
| performance_schema_max_socket_classes                | 10                   |
| performance_schema_max_socket_instances              | -1                   |
| performance_schema_max_sql_text_length               | 1024                 |
| performance_schema_max_stage_classes                 | 150                  |
| performance_schema_max_statement_classes             | 193                  |
| performance_schema_max_statement_stack               | 10                   |
| performance_schema_max_table_handles                 | -1                   |
| performance_schema_max_table_instances               | -1                   |
| performance_schema_max_table_lock_stat               | -1                   |
| performance_schema_max_thread_classes                | 50                   |
| performance_schema_max_thread_instances              | -1                   |
| range_optimizer_max_mem_size                         | 8388608              |
| slave_max_allowed_packet                             | 1073741824           |
| slave_pending_jobs_size_max                          | 16777216             |
+------------------------------------------------------+----------------------+

70 rows in set (0.01 sec)

$ mysql -u user -p database_name < database_dump.sql --force --wait --reconnect

ERROR 2006 (HY000) at line 4432: MySQL server has gone away
ERROR 2006 (HY000) at line 4433: MySQL server has gone away
...
ERROR 2006 (HY000) at line 5707: MySQL server has gone away
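
The SHOW VARIABLES output suggests the my.cnf changes never took effect (max_allowed_packet is still the 4MB default), which would explain the "server has gone away" errors on large statements; this is only a guess, since several of those lines (trailing semicolons, PHP-style options like post_max_size) would not be accepted under [mysqld] anyway. A sketch of a runtime workaround, assuming the import is re-run on a new connection after the change:

-- max_allowed_packet caps at 1G (1073741824 bytes); larger values are rejected
SET GLOBAL max_allowed_packet = 1073741824;
SHOW VARIABLES LIKE 'max_allowed_packet';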

Saving a PDF file to MySQL – Java

Hello, I am struggling to save a PDF file in a MySQL database. I convert the file to a byte array (I ran a test and the array is correct, since I can even write a copy of it to another directory) and I manage to save it into a LONGBLOB column in MySQL. My problem is retrieving that file and saving it to a directory when I read it back from the database. It does save a PDF file, but when I try to open it, I get a message saying the file is corrupted or was not decoded correctly.

Below is the code that converts the file to a byte array:

    File f = new File("C:\\SAFRAS\\ticket.pdf");
    InputStream is = new FileInputStream(f);
    byte[] bytes = new byte[(int) f.length()];
    int offset = 0;
    int numRead = 0;
    while (offset < bytes.length
            && (numRead = is.read(bytes, offset, bytes.length - offset)) >= 0) {
        offset += numRead;
    }

Here is the code that saves it to the database:

String sql = "INSERT INTO `documento_digitalizado` (`nome_arquivo`, `fk_parceiro`, `arquivo`) VALUES"
                    + " ('" + d.getNomeArquivo() + "', '" + d.getFkParceiro() + "', '" + Arrays.toString(d.getArquivo()) + "');";

And now the part that is causing me problems: the code I use to retrieve the file from the database and save it to a directory, which produces the corrupted file.

    ResultSet rs = Saude.busca(sql);
    if (rs != null && rs.next()) {
        d.setArquivo(rs.getBytes("arquivo"));

        // convert the byte array back into a file
        File f = new File("C:\\SAFRAS\\ticket_teste.pdf");
        FileOutputStream fos = new FileOutputStream(f);
        fos.write(d.getArquivo());
        fos.close();

I would appreciate help figuring out where I am going wrong, whether it is in writing to the database or in retrieving the file.
Thanks in advance.
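
One quick way to narrow down whether the write side is at fault (a sketch; the table and column names are taken from the INSERT above): look at what is actually stored. A binary PDF starts with the bytes %PDF, whereas a value built with Arrays.toString() would be stored as literal text such as "[37, 80, 68, …]".

-- inspect the stored blob: its length and its first bytes
SELECT nome_arquivo,
       LENGTH(arquivo)  AS stored_bytes,
       LEFT(arquivo, 8) AS first_bytes
FROM documento_digitalizado;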