sql server – Fatal Python Error

Hi, I downloaded the 64-bit version 3.7.0 and I am using it in Visual Studio Code. When I open it in the command prompt and try to run it, I get this error:

Fatal Python error: initfsencoding: unable to load the file system codec
ModuleNotFoundError: No module named 'encodings'

Current thread 0x00002320 (most recent call first):

Also, when I open it from Visual Studio Code it runs it through the SQL server. How can I avoid that so it takes less time?

sql server – Quick Question on sleeping session with open transaction

On one of our SQL Servers there is a stored procedure that creates a blocking mess:

That stored proc completes in under a second most of the time, but somehow it is leaving a transaction open. What I see from sp_whoisactive is status = sleeping and open tran 1 for a duration of approximately 5-6 minutes. During that window a heavy blocking chain shows up.

This SP does not contain any explicit transaction (no BEGIN TRAN or COMMIT TRAN). It does a basic select col,col2,col3…. into #temptable from table1 inner join table2 …, and then selects from that #temptable.

While we check on the application side why a transaction could be left open, I was reading that in such scenarios one should use SET XACT_ABORT ON in the SP itself. But when there is no transaction involved, how would the XACT_ABORT setting help in this case?
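For what it is worth, a sleeping session that still holds a transaction can be confirmed directly from the DMVs. A sketch (note that `open_transaction_count` on `sys.dm_exec_sessions` requires SQL Server 2012 or later; on older builds, join `sys.dm_tran_session_transactions` instead):

```sql
-- Sessions that are idle but still hold an open transaction
SELECT session_id, login_name, status, last_request_end_time,
       open_transaction_count
FROM   sys.dm_exec_sessions
WHERE  status = 'sleeping'
  AND  open_transaction_count > 0;
```

Comparing `last_request_end_time` against the blocking window can confirm whether the sleeper is the head of the chain.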

Please advise

SQL Server Agent unable to view network drives

I am unable to get Agent jobs to write their output to a network path. I asked the IT guy to set up a domain-authenticated user that the Agent service logs in as. That login has access to the domain and can see the network drives. If I set the location of the output file to the local C: drive, this works without issue. However, if I set it to a network location I get the following message:

(SQLSTATE 01000) (Message 0)Unable to open Step output file. The step succeeded.

Any help would be very much appreciated

t sql – How do you get more speed from merging two records into one? T-SQL

I have a table of time punches. They need to be matched up, an “in” punch with an “out” punch, in order to locate any missing punches. I have a way to do that, but it’s slow: about 45 seconds for about 700 records, which is way too slow.

This is a simplified version of the punch table.

create table tbl_punches (
   LocationID  int NOT NULL,
   UserID  int NOT NULL,
   PunchType varchar(3) NOT NULL, -- "IN" or "OUT"
   PunchDT datetime NOT NULL
);

To build the missing-punch report, I have a stored proc with a cursor that looks roughly like this (I’m simplifying it here):

DECLARE cur_punches CURSOR
     FOR  select  LocationID, UserID,
                  PunchDT as InPunchDT,
                  NULL as OutPunchDT
          from tbl_punches
          where PunchType = 'IN'
          union all
          select  LocationID, UserID,
                  NULL as InPunchDT,
                  PunchDT as OutPunchDT
          from tbl_punches
          where PunchType = 'OUT'
          order by LocationID, UserID, InPunchDT

Then I step through the cursor and build the report by matching the “In” and “Out” records into a single record that is inserted into a temp table. At the end, I pull the contents of that temp table, and that is the missing-punch report.

Is there a faster way to do this kind of thing, maybe without a cursor?
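As a starting point, window functions (SQL Server 2012+) can do the pairing in one set-based pass. A sketch against the simplified table above, not tested against real data:

```sql
-- Pair each punch with the next punch for the same user/location.
-- An 'IN' whose next punch is not an 'OUT' (or that has no next
-- punch at all) is a candidate missing punch.
WITH ordered AS (
    SELECT LocationID, UserID, PunchType, PunchDT,
           LEAD(PunchType) OVER (PARTITION BY LocationID, UserID
                                 ORDER BY PunchDT) AS NextType,
           LEAD(PunchDT)   OVER (PARTITION BY LocationID, UserID
                                 ORDER BY PunchDT) AS NextDT
    FROM tbl_punches
)
SELECT LocationID, UserID,
       PunchDT AS InPunchDT,
       CASE WHEN NextType = 'OUT' THEN NextDT END AS OutPunchDT
FROM ordered
WHERE PunchType = 'IN';
```

Rows where OutPunchDT comes out NULL are the unmatched “IN” punches; a symmetric query with LAG() would catch “OUT” punches with no preceding “IN”.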

sql – Indexing 2 or 3 columns when only 2 are heavily used. Optimizing a MySQL search

My question: suppose I have a table with the following fields:


My query will usually be on unidad-fecha, but sometimes it can help to add idcliente to the query. Is it bad to index all 3 columns because one of them (idcliente) will rarely be used, or is there no difference between indexing unidad-fecha versus unidad-fecha-idcliente?
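For context, MySQL can use the leftmost prefix of a composite index, so a single three-column index serves both query shapes. A sketch using the column names from the question (the table and index names are placeholders):

```sql
-- Queries filtering on (unidad, fecha) can use the leftmost prefix
-- of this index; queries that also filter on idcliente use all three.
CREATE INDEX idx_unidad_fecha_idcliente
    ON mi_tabla (unidad, fecha, idcliente);
```

The trade-off of the extra column is a slightly larger index and slightly more expensive writes, not slower unidad-fecha lookups.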

postgresql – Join or multiple SQL queries

I’m wondering what is the most effective (fast and least work for the database) way to retrieve data.
I have 2 tables:


users:

id | name
1  | John
2  | Mike
3  | Jack

data:

user_id | value
2       | some_data_1
3       | some_data_2

I need to get a value by user name, and as I understand it I have two options:

  1. select id from users where name = 'some_name', and then select from the data table by that id.

  2. select id from users join data on users.id = data.user_id where users.name = 'some_name'

Also, I guess it’s important to note: these are examples of real tables with thousands of rows and a few more columns; there is an index on the user_id column in the data table; I’m using a JDBC driver underneath, so each query is a network call; and it’s PostgreSQL, if that matters.
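One way to compare the two options empirically is to have PostgreSQL report the plan and timing for the join form. A sketch against the example tables (`value` is an assumed column name):

```sql
EXPLAIN ANALYZE
SELECT d.value
FROM   users AS u
JOIN   data  AS d ON d.user_id = u.id
WHERE  u.name = 'some_name';
```

Whatever the per-query plans show, the two-query option additionally pays a second network round trip, which often dominates for small result sets.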

query – Why is there just one HAVING in SQL?

The distinction between the where-clause and the having-clause is that one filters rows before aggregation and the other after. They are essentially the same filtering operation, just performed at different stages in the evaluation of the query.

The reason why there cannot be multiple such clauses, evaluated one after another in order of their appearance (similar to joins), is simply an inherited limitation of the SQL language, the fundamentals of which were laid down in the 1970s.

The effect of multiple group-by- and having-clauses can still be achieved (at modest additional length) with nested queries (or “derived tables”).
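To illustrate the nested-query workaround (table and column names hypothetical):

```sql
-- The inner HAVING filters groups; the outer WHERE acts as a
-- "second HAVING" on the already-aggregated rows.
SELECT dept, headcount
FROM (
    SELECT dept, COUNT(*) AS headcount
    FROM   employees
    GROUP  BY dept
    HAVING COUNT(*) > 5
) AS per_dept
WHERE headcount < 100;
```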

Crucially however, in the original SQL language design, there was no facility for nested queries. Everything you wanted to achieve in a single statement, had to be done in a single flat query (i.e. with at most one appearance of each clause, and no nesting). Joins did not have their own separate clause.

I believe the facility for nested queries (derived tables in the from-clause) was only introduced in SQL-92 (and the with-clause, defining “common table expressions”, only in SQL:1999). The reason the facility didn’t exist before then was that database engines couldn’t handle the complexity of parsing and executing such queries – you could either use stored procedures with multiple statements and intermediate tables, or do the processing in application code.

But the need to filter rows before and after aggregation, was obviously considered such an important need from the outset, that the where-clause and having-clause were incorporated as part of the original (i.e. pre-standardised) design of the SQL language.

object oriented design – Frameworks for creating dynamic SQL

We are trying to create a data platform where users upload data and create dashboards.
I am trying to abstract the SQL execution process. Currently, a custom object is created for every request, which is then converted into SQL using string builders.
Are there any frameworks that can help with dynamic SQL creation?

I checked out MDX, which works on OLAP with a predefined schema. I am trying to research frameworks that can take the schema at run time and help with the SQL generation.

Are there any frameworks that support such a requirement?

sql server – Reorganizing/rebuilding indexes on 460 THOUSAND tables?

I’ve been tasked with maintaining an old server (SQL Server 2005; no, I can’t upgrade it!) which has 461,000 tables and 1.15 million indexes. How do I go about maintaining those indexes?

My first thought is to create a list (stored in a table) of indices, having these attributes:

  • schema name
  • table name
  • index name
  • page count (null at first)
  • frag pct (null at first)
  • date rebuilt/reorganized (null at first)

From there, each night I would query sys.dm_db_index_physical_stats for as many tables as I can analyze and update the list. Eventually I’d have fragmentation statistics for every table. After that, I can defragment (reorganize or rebuild as necessary) all the indexes which I deem require it.
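The nightly collection step might look something like this (the staging table name is hypothetical, and a real version would batch by object to stay within the nightly window; 'LIMITED' mode keeps the scan cheap):

```sql
INSERT INTO dbo.IndexFragList
        (SchemaName, TableName, IndexName, PageCount, FragPct)
SELECT  s.name, o.name, i.name,
        ps.page_count, ps.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i ON i.object_id = ps.object_id
                        AND i.index_id  = ps.index_id
JOIN    sys.objects AS o ON o.object_id = ps.object_id
JOIN    sys.schemas AS s ON s.schema_id = o.schema_id
WHERE   i.name IS NOT NULL;   -- skip heaps
```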

Are there any better ways to do this?

Application in ADO.NET and SQL Server

I am developing an application that synchronizes product, customer, order, etc. data between an e-commerce site and SQL Server; I am doing it with ADO.NET.
This database belongs to an ERP, so there are usually people connected. My question is:
Can problems arise if I am inserting data at the same time that someone is running a query against one of the affected tables?