vb.net: initialize a JArray with JObjects that contain a progressive index

I need to initialize a JArray with n JObjects, each containing an areaNum property that is an incremental integer.

Right now I have the following code. It works fine for a small number of JObjects, but otherwise it takes a long time.

I'm looking for a way to rewrite this code with performance in mind.


            Dim panelsArray As New JArray()

            For i As Integer = 1 To 5
                Dim panel As New JObject(
                    New JProperty("areaNum",            i),
                    New JProperty("numbers",            New JArray({New JArray({}), New JArray({}), New JArray({})})),
                    New JProperty("systems",            New JArray({""})),
                    New JProperty("multiplier",         New JArray({""})),
                    New JProperty("qp",                 False),
                    New JProperty("series",             False),
                    New JProperty("noseries",           False),
                    New JProperty("twodigit",           False),
                    New JProperty("onedigit",           False),
                    New JProperty("void",               False))
                panelsArray.Add(panel)
            Next i
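
If the real n is large and every panel has the same shape, one option (a sketch, assuming Newtonsoft.Json.Linq; not a drop-in replacement) is to build a template JObject once and DeepClone it inside the loop, overwriting only areaNum, since cloning a prepared token avoids reconstructing every JProperty on each iteration:

            ' Minimal sketch: clone a prepared template instead of rebuilding
            ' each JObject; property names mirror the code above.
            Dim template As New JObject(
                New JProperty("areaNum", 0),
                New JProperty("numbers", New JArray({New JArray({}), New JArray({}), New JArray({})})),
                New JProperty("systems", New JArray({""})),
                New JProperty("multiplier", New JArray({""})),
                New JProperty("qp", False),
                New JProperty("series", False),
                New JProperty("noseries", False),
                New JProperty("twodigit", False),
                New JProperty("onedigit", False),
                New JProperty("void", False))

            Dim panelsArray As New JArray()
            For i As Integer = 1 To 5
                Dim panel As JObject = DirectCast(template.DeepClone(), JObject)
                panel("areaNum") = i   ' only the counter changes per panel
                panelsArray.Add(panel)
            Next i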


python – Exception has occurred: IndexError: list index out of range when entering data with a split

The error is: Exception has occurred: IndexError: list index out of range, on line 43.

  • When: when I enter the value 30 at bonoAparcar = input("Cuanto rato quieres de zona azul, puedes pagar 30, 60, 90 o 120 minutos") on line 40

I've gone over it several times and I just can't find the error. Thank you very much in advance.

#The program will calculate the discount you get for trading in your car to buy a new one.
#If it is from before '98 a 10% discount applies; if it is from after 2008 you get a 15% discount.
#If it is diesel, add another 10%; if it is gasoline, 12%.
#The program must tell you the final discount you will get toward your new car.
#The program must ask the user which brand they want to buy; the dealership works with: ford, toyota or wolkswagen.
#The program must ask the user whether the car being bought is: electrico, hibrido or gas.
#Depending on the type, it may or may not drive in urban centers with restrictions.
#Tolls also depend on electrico, hibrido or gas.
#The road-tax discount also depends on electrico, hibrido or gas.
#Blue-zone parking costs x, and beyond x, depending on the type, it costs y.
modeloEntrega=input("Introduce la marca, el modelo de coche, el año, si es gasolina o diesel, SEPARADO POR COMAS\n")
entrada=modeloEntrega.split(',')
marca=str(entrada[0])
modelo=str(entrada[1])
anio=int(entrada[2])
gasolinaDiesel=str(entrada[3])
print (marca,modelo,anio,gasolinaDiesel)

# Parentheses added: without them, "and" binds tighter than "or" and the year
# check was silently ignored for "Diesel"/"Gasolina".
if anio < 1998 and (gasolinaDiesel == "diesel" or gasolinaDiesel == "Diesel"):
    descuento= 0.10+0.10
    print (descuento)
elif anio > 2008 and (gasolinaDiesel == "gasolina" or gasolinaDiesel == "Gasolina"):
    descuento= 0.15+0.12
    print (descuento)
else:
    print ("No has introducido valores validos")

modeloCompra=input("Introduce la marca, el modelo, si es electrico, hibrido o gas, SEPARADO POR COMAS\n")
entrada=modeloCompra.split(',')
marcac=str(entrada[0])
modeloc=str(entrada[1])
tipo=str(entrada[2])
print (marcac,modeloc,tipo)
peajeHora=10

if tipo == "electrico":
    print ("Puede circular en núcleo urbano sin restriccionn")
    #print ("El minuto de zona azul vale 0,10")
    bonoAparcar=input("Cuanto rato quieres de zona azul,puedes pagar 30, 60, 90 o 120 minutos")
    entrada=bonoAparcar.split()
    mediaHora=int(entrada(0))
    unaHora=int(entrada(1))
    horayMedia=int(entrada(2))
    dosHoras=int(entrada(3))
    precioMinuto=10

    if bonoAparcar == 30:
        bonoAparcar =  10 * 30
        descBono=int(input("Si no has estado los 30 minutos enteros, indica quantos has estado y te haremos el descuento"))
        preciobonoFinal=bonoAparcar - descBono

    elif bonoAparcar == 60:
        bonoAparcar =  10 * 60
        descBono=int(input("Si no has estado los 60 minutos enteros, indica quantos has estado y te haremos el descuento"))
        preciobonoFinal=bonoAparcar - descBono

    elif bonoAparcar == 90:
        bonoAparcar =  10 * 90
        descBono=int(input("Si no has estado los 90 minutos enteros, indica quantos has estado y te haremos el descuento"))
        preciobonoFinal=bonoAparcar - descBono

    elif bonoAparcar == 120:
        bonoAparcar =  10 * 120
        descBono=int(input("Si no has estado los 120 minutos enteros, indica quantos has estado y te haremos el descuento"))
        preciobonoFinal=bonoAparcar - descBono

elif tipo == "hibrido":
    print ("Puede circular en núcleo urbano hasta 2 horas al dia")
    bonoAparcar=input("Cuanto rato quieres de zona azul, la fraccion horaria minima es media hora, y el máximo son 2 horas")

elif tipo == "gas":
    print ("Puede circular en núcleo urbano hasta 4 horas al dia")
    bonoAparcar=input("Cuanto rato quieres de zona azul, la fraccion horaria minima es media hora, y el máximo son 2 horas")
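
For what it's worth, a minimal sketch of a more defensive way to read that value (the helper name pedir_bono is mine, not from the original code): convert the single number the user types instead of splitting it, and re-prompt until it is one of the four allowed durations.

# Minimal sketch with a hypothetical helper: keep asking until the user
# enters one of the allowed blue-zone durations, then return it as an int.
def pedir_bono():
    permitidos = {30, 60, 90, 120}
    while True:
        texto = input("Cuanto rato quieres de zona azul? (30, 60, 90 o 120 minutos) ")
        try:
            minutos = int(texto)
        except ValueError:
            print("Introduce un numero, por favor.")
            continue
        if minutos in permitidos:
            return minutos
        print("Solo se permiten 30, 60, 90 o 120 minutos.")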

How can I solve the problem of displaying index pages based on the data shown on the right side in PHP?


410 Gone: is it good to put X-Robots-Tag on 410 pages that are still in the Google index?

No, there is no need for that. As John Mueller said on Google Webmaster Central:

From our point of view, in the medium term / long term, a 404 is the same as a 410 for us. So, in both cases, we remove those URLs from our index.

It's normal if Google still crawls those URLs from time to time:

We will still come back and check again and make sure those pages have really disappeared or maybe the pages have come back to life.

If those pages are still indexed, it could be because they don't have much popularity and Googlebot doesn't crawl them very often. Simply wait, or use the Remove URLs tool to speed up the process.

sql server: the filtered index is not used when a variable is in the WHERE condition

Why does MS SQL Server refuse to use the supporting filtered index in this scenario?

-- demo data
CREATE TABLE #Test (
    ID INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_Test_ID PRIMARY KEY
    ,Col1 NVARCHAR(36) NOT NULL DEFAULT NEWID()
    ,Col2 NVARCHAR(20) NOT NULL DEFAULT N''  -- !!
    );

WITH
    L0   AS(SELECT 1 AS C UNION ALL SELECT 1 AS C), -- 2 rows
    L1   AS(SELECT 1 AS C FROM L0 AS A CROSS JOIN L0 AS B), -- 4 rows
    L2   AS(SELECT 1 AS C FROM L1 AS A CROSS JOIN L1 AS B), -- 16 rows
    L3   AS(SELECT 1 AS C FROM L2 AS A CROSS JOIN L2 AS B), -- 256 rows
    L4   AS(SELECT 1 AS C FROM L3 AS A CROSS JOIN L3 AS B), -- 65,536 rows
    L5   AS(SELECT 1 AS C FROM L4 AS A CROSS JOIN L4 AS B), -- 4,294,967,296 rows
    Nums AS(SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS N FROM L5)
INSERT INTO #Test(Col2)
SELECT TOP 100000 N''
FROM Nums;

INSERT INTO #Test(Col2)
VALUES(N'ABC');

-- FILTERED index to support filter predicate of a query
CREATE NONCLUSTERED INDEX IX_Test_Col2_filtered ON #Test (Col2 ASC) WHERE Col2 <> N'';

-- just checking statistics
DBCC SHOW_STATISTICS('#Test', 'IX_Test_Col2_filtered')

-- condition on variable = index scan :-(
DECLARE @Filter NVARCHAR(20) = N'ABC'

SELECT Col1
FROM #Test
WHERE Col2 = @Filter
    AND Col2 <> N'';

(execution plan screenshot: index scan)

Everything goes as expected when literals are used.

-- condition on literal value - index seek + key lookup :-)
SELECT Col1
FROM #Test
WHERE Col2 = N'ABC';

(execution plan screenshot: index seek + key lookup)
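
A likely explanation: the cached plan for a variable has to be valid for any value of @Filter, including N'', so the optimizer cannot commit to the filtered index. A sketch of the commonly suggested OPTION (RECOMPILE) workaround, which lets the optimizer embed the variable's runtime value:

-- Sketch: OPTION (RECOMPILE) lets the optimizer see the runtime value of
-- @Filter, so the filtered index can qualify for this statement.
DECLARE @Filter NVARCHAR(20) = N'ABC';

SELECT Col1
FROM #Test
WHERE Col2 = @Filter
    AND Col2 <> N''
OPTION (RECOMPILE);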

Sharepoint online – Hillbilly Cascading Dropdowns in a modern-experience list form?

I have the Hillbilly Cascading Dropdowns script working in classic list forms; it lets users pick a value in one input (an HTML select) and then see the values of a later input filtered by the value selected in the previous one.

From what I've read, this is called Hillbilly Cascading Dropdowns.

My problem is that I cannot add or hook this code into a modern SharePoint experience. I've read that it is not implemented for modern lists.

Is there any way to implement this in a modern-experience list (as a JSON-formatted column, etc.)?

I can only work client-side.

seo – How can I index the sitemap of my website?

I want to optimize my website's sitelinks through Search Console / Webmaster Tools.

When I search on Google, I want certain menus to appear under the result, for example menu A, menu B, menu C.

I am looking for references. From what I've read, this can be configured through Search Console, and it first requires a sitemap.xml to be crawled.

So I went to https://www.xml-sitemaps.com/, entered my domain and clicked the start button. After the process completed, I downloaded the sitemap XML file.

My question is: do I upload the XML file directly to my hosting, or do I need to edit it first to establish which menus appear when keywords are typed into Google?
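
For reference, a sitemap in the sitemaps.org format only lists URLs to be crawled; a minimal entry looks like the sketch below (example.com and the menu URL are placeholders). It has no field for choosing which menus show up under a search result:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want crawled; placeholders throughout. -->
  <url>
    <loc>https://www.example.com/menu-a</loc>
    <lastmod>2020-01-01</lastmod>
  </url>
</urlset>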

How to optimize count(*) using only the foreign key index in PostgreSQL

I have a web application that uses data tables that connect to the database to fetch their data. However, it always needs to count the total results first before it can paginate the data using LIMIT and OFFSET.

Are there any special indexes or configuration settings to force the count to be computed from the index alone, without touching the underlying table? Right now it seems to ignore the index when running the count.
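
A sketch of the usual approach (orders and customer_id are made-up names for illustration): PostgreSQL can compute a count from an index alone only via an index-only scan, and that requires the table's visibility map to be reasonably current, so running VACUUM first often makes the difference:

-- Hypothetical table/column names, for illustration only.
CREATE INDEX IF NOT EXISTS idx_orders_customer_id ON orders (customer_id);

-- Index-only scans depend on an up-to-date visibility map, which VACUUM maintains.
VACUUM (ANALYZE) orders;

-- Check whether the planner now reports an "Index Only Scan" node.
EXPLAIN (ANALYZE)
SELECT count(*) FROM orders WHERE customer_id = 42;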

sql server – Slow query despite index

Issue

I need to create a graph of user retention over time, similar to this:
(user retention chart image)

Ignoring the percentages for a minute, I have a query that shows unique users for a given "cohort" and then the number of returning users. However, with the volume of data we have acquired in recent weeks, the query no longer finishes.

Query

;WITH dates AS
(
-- Set up the date range
SELECT convert(date,GETDATE()) as dt, 1 as Id
UNION ALL
SELECT DATEADD(dd,-1,dt),dates.Id - 1
FROM dates
WHERE Id >= -84
)
, cohort as (
-- create the cohorts
SELECT dt AS StartDate, 
    convert(date,CASE WHEN DATEADD(DD, 6, dt) > convert(date,GETDATE()) THEN convert(date,GETDATE()) ELSE DATEADD(DD, 6, dt) END) as EndDate, 
    CONCAT(FORMAT(dt, 'MMM dd'), ' - ', FORMAT(CASE WHEN DATEADD(DD, 6, dt) > GETDATE() THEN GETDATE() ELSE DATEADD(DD, 6, dt) END, 'MMM dd')) as Cohort,
    row_number() over (order by dt) as CohortNo
FROM dates A
WHERE  DATEPART(dw,dt)=1
)
 , cohortevent as (
-- The complete set of cohorts and their events
select c.*, e.*
from cohort c
left join Event e on e.eventtime between c.StartDate and C.EndDate
)
, Retained as(
-- Recursive CTE that works out how long each user has been retained
select c.StartDate,c.EndDate,c.CoHort,c.CohortNo,c.EventId,c.EventTime,c.Count,c.UserID, case when Userid is not null then 1 else 0 end as ret
from cohortevent c
union all
select c.StartDate,c.EndDate,c.CoHort,c.CohortNo,c.EventId,c.EventTime,c.Count,c.UserID, ret+1
from cohortevent c
join Retained on Retained.userid=c.userid and Retained.CohortNo=c.CohortNo-1 and Retained.eventid

Environment

All tables are CTEs except Event, which has two main columns that interest us: UserId and EventTime.

What I tried

I have added indexes on UserId and EventTime. I noticed that the DTUs (this is an Azure SQL instance) were originally maxed out, but I have scaled the database instance up vertically, so it now runs at 70% DTU usage and still has not completed after more than 30 minutes. Currently there are only 40k rows in Event.
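
One thing worth trying, sketched under the assumption that Event also carries the EventId and Count columns the query selects: a single composite index that covers the recursive join's predicates, rather than separate single-column indexes on UserId and EventTime:

-- Sketch: composite index covering the join predicates (column list assumed
-- from the query above, not from the actual schema).
CREATE NONCLUSTERED INDEX IX_Event_UserId_EventTime
ON Event (UserId, EventTime)
INCLUDE (EventId, [Count]);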

Python: creating an inverted index and postings lists takes a long time

I am working on an information retrieval project where I have to process ~1.5 GB of text data and create a dictionary (word, document frequency) and postings lists (document ID, term frequency). According to the teacher, it should take around 10-15 minutes, but my code has now been running for more than 8 hours! I tried a smaller data set (~35 MB) and it took 5 hours to process.

I am a Python newbie and I think it is taking so long because I have created many Python dictionaries and lists in my code. I tried using generators, but I'm not sure how to fix it.

import re
import string
import json
import numpy as np
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords

file = open(filename, 'rt')
text = file.read()
file.close()

# Placeholder patterns: the original markup tags were lost in the question's
# formatting, so <DOC> stands in for whatever tags the collection really uses.
p = r'<DOC>.*?</DOC>'                    # placeholder: matches one document block
tag = RegexpTokenizer(p)
passage = tag.tokenize(text)
doc_re = re.compile(r'<DOC ID=(\d+)>')   # placeholder: captures the document ID

def process_data(docu):
    tokens = RegexpTokenizer(r'\w+')
    lower_tokens = (word.lower() for word in tokens.tokenize(docu))  # convert to lower case
    table = str.maketrans('', '', string.punctuation)
    stripped = (w.translate(table) for w in lower_tokens)  # remove punctuation
    alpha = (word for word in stripped if word.isalpha())  # remove tokens that are not alphabetic
    stopwordlist = stopwords.words('english')
    stopped = (w for w in alpha if not w in stopwordlist)  # remove stopwords
    return stopped

data = {}  # dictionary: key = doc ID, value: words/terms
for doc in passage:
    group_docID = doc_re.match(doc)
    docID = group_docID.group(1)
    tokens = process_data(doc)
    data[docID] = list(set(tokens))

vocab = [item for i in data.values() for item in i]  # all words in the dataset
total_vocab = list(set(vocab))  # unique words/vocabulary for the whole dataset
total_vocab.sort()

print('Document Size = ', len(data))          # no. of documents
print('Collection Size = ', len(vocab))       # no. of words
print('Vocabulary Size= ', len(total_vocab))  # no. of unique words

inv_index = {}  # dictionary: key = word/term, value: (docid, termfrequency)
for x in total_vocab:
    for y, z in data.items():
        if x in z:
            wordfreq = z.count(x)
            inv_index.setdefault(x, []).append((int(y), wordfreq))

flattend = (item for tag in inv_index.values() for item in tag)  # ((docid, tf))
posting = [item for tag in flattend for item in tag]  # (docid, tf)

# document frequency for each vocabulary word
doc_freq = []
for k, v in inv_index.items():
    freq1 = len([item for item in v if item])
    doc_freq.append(freq1)

# offset value of each vocabulary word
offset = []
offset1 = 0
for i in range(len(doc_freq)):
    if i > 0:
        offset1 = offset1 + (doc_freq[i-1] * 2)
    offset.append(offset1)

# create dictionary of word -> (document frequency, offset)
dictionary = {}
for i in range(len(total_vocab)):
    dictionary[total_vocab[i]] = (doc_freq[i], offset[i])

# dictionary of word -> inverse document frequency
idf = {}
for i in range(len(dictionary)):
    a = np.log2(len(data) / doc_freq[i])
    idf[total_vocab[i]] = a

with open('dictionary.json', 'w') as f:
    json.dump(dictionary, f)
with open('idf.json', 'w') as f:
    json.dump(idf, f)

binary_file = open('binary_file.txt', 'wb')
for i in range(0, len(posting)):
    binary_int = posting[i].to_bytes(4, byteorder='big')
    binary_file.write(binary_int)
binary_file.close()

Could someone help me rewrite this code to make it more computationally and time efficient?

There are about 57982 documents like that.
Input File:

Background Adrenal cortex oncocytic carcinoma (AOC) represents an exceptional pathological entity, since only 22 cases have been documented in the literature so far. Case presentation Our case concerns a 54-year-old man with past medical history of right adrenal excision with partial hepatectomy, due to an adrenocortical carcinoma. The patient was admitted in our hospital to undergo surgical resection of a left lung mass newly detected on chest Computed Tomography scan. The histological and immunohistochemical study revealed a metastatic AOC. Although the patient was given mitotane orally in adjuvant basis, he experienced relapse with multiple metastases in the thorax twice.....

I am trying to tokenize each document by word, store the frequency of each word in a dictionary, and save it to a JSON file.
Dictionary

word      document_frequency   offset
medical   2500                 3414
research  320                  4200

Also, generate an index where each word has a postings list of document IDs and term frequencies:

medical (2630932, 20), (2795320, 2), (26350421, 31).... 
research (2783546, 243), (28517364, 310)....

and then save these postings in a binary file:

2630932 20 2795320 2 2635041 31....

with an offset value for each word, so that when loading the postings list from disk, you can use the seek function to get the postings of each corresponding word.
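
A minimal sketch of the usual single-pass construction (the function name and sample data are mine, not from the code above): scan each document once and let collections.Counter compute the term frequencies, instead of looping over the whole vocabulary for every document:

from collections import Counter

def build_inverted_index(docs):
    """docs maps doc ID -> iterable of normalized tokens.
    Returns {word: [(doc_id, term_frequency), ...]} built in one pass."""
    inv_index = {}
    for doc_id, tokens in docs.items():
        # Counter yields each term's frequency for this document in one scan.
        for word, tf in Counter(tokens).items():
            inv_index.setdefault(word, []).append((int(doc_id), tf))
    return inv_index

# Usage sketch: document frequency then falls out as the postings-list length.
index = build_inverted_index({'101': ['adrenal', 'carcinoma', 'adrenal']})
doc_freq = {word: len(postings) for word, postings in index.items()}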