python 3.x – Question about "continue" (SyntaxError: 'continue' not properly in loop)

I'm just starting out in Python and I don't understand why this doesn't work. Could you help me?

def SecretNumber():
    GotIt = False
    while GotIt == False:
        One = int(input("Enter a number between 1 and 10: "))
        Two = int(input("Enter another number between 1 and 10: "))

    if (One >= 1) and (One <= 10):
        if (Two >= 1) and (Two <= 10):
            print('Your secret number is: ' + str(One * Two))
            GotIt = True
            continue
        else:
            print('Second value is wrong!')
    else:
        print('First value is wrong!')
    print("Try again!")

File "cell_name", line 14
SyntaxError: 'continue' not properly in loop
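The error arises because the if block sits at the same level as the while loop instead of inside it, so the continue has no enclosing loop. A minimal sketch of the fix, indenting the validation into the loop body so continue is legal again:

def SecretNumber():
    GotIt = False
    while GotIt == False:
        One = int(input("Enter a number between 1 and 10: "))
        Two = int(input("Enter another number between 1 and 10: "))

        # The validation now lives inside the while block.
        if (One >= 1) and (One <= 10):
            if (Two >= 1) and (Two <= 10):
                print('Your secret number is: ' + str(One * Two))
                GotIt = True
                continue  # legal here: skips the "Try again!" below
            else:
                print('Second value is wrong!')
        else:
            print('First value is wrong!')
        print("Try again!")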

python 3.x and Pandas – How to group a DataFrame by week number?

I have a CSV file containing a column with dates from January 1, 2019 to the present.
The same date can appear several times across rows.
I want to group the data by week number; however, for particular reasons, the week does not start on Monday but on Thursday, and ends on Wednesday.

The objective is to create, from the date in each row, a column called week that is assigned the number of the corresponding week.

In 2019, January 1 falls on a Tuesday, so it would be day 6 of week 1, and January 2 would be day 7 of week 1.

From Thursday the 3rd through Wednesday the 9th, the week column should contain the number 2, and so on.

What I have done so far is decompose the date into several columns:

date        year  month  day  weekday    dia_mb
2019-01-01  2019  01     01   Tuesday    6
2019-01-02  2019  01     02   Wednesday  7
2019-01-03  2019  01     03   Thursday   1

From this data, how can I create the new week column in the DataFrame, given that the week number must increase each time the date advances and the dia_mb column returns to 1?
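A minimal sketch of one approach, assuming the dates live in a column called date: anchor the count at the Thursday that opens week 1 (2018-12-27, since January 1 and 2 belong to week 1) and integer-divide the elapsed days by 7. The anchor date, the example range, and the column names are assumptions based on the table above.

import pandas as pd

df = pd.DataFrame({"date": pd.date_range("2019-01-01", "2019-01-10")})

# Week 1 runs Thursday 2018-12-27 through Wednesday 2019-01-02, so the
# week number is the count of whole weeks elapsed since that Thursday.
anchor = pd.Timestamp("2018-12-27")
df["week"] = (df["date"] - anchor).dt.days // 7 + 1

# dia_mb: day of the custom week, Thursday=1 ... Wednesday=7
# (pandas counts weekdays as Mon=0 ... Sun=6, so Thursday is 3).
df["dia_mb"] = (df["date"].dt.weekday - 3) % 7 + 1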

python 3.x – replace if __name__ == '__main__': with some other function, or explain what it does in this exercise

Could you help me replace the if __name__ == '__main__': line? Or, if someone knows how it works, could you help me understand what it does in this code?

if __name__ == '__main__':
    print("Enter the number of people:")
    n = int(input())
    acum = 0
    acum2 = 0
    acum3 = 0
    people = 1.70
    for i in range(1, n + 1):
        print("Enter the height of the", i, "person:")
        data = float(input())
        if data >= people:
            acum2 = acum2 + 1
        if data < people:
            acum3 = acum3 + 1
        acum = acum + data
    prom = acum / n

    print("##############################")
    print("People under 1.70 m are:", acum3)
    print("##############################")
    print("People with a height greater than or equal to 1.70 m are:", acum2)
    print("##############################")
    print("The average height of the", n, "people is:", prom)

Python 3.x – How to make loops in scrapy?

I'm scraping the Dmoz website with Scrapy. First, I want to loop over my functions, because the for loop I use in each one is identical and I have to repeat it again and again in every function. Second, I want to loop over yield response.follow, because if I scrape more pages I will have to write it again and again. Is there a way to solve these two problems? I tried several times but failed.

        # save and call another page
        yield response.follow(self.about_page, self.parse_about, meta={'items': items})
        yield response.follow(self.editor, self.parse_editor, meta={'items': items})

    def parse_about(self, response):
        # do your stuff on the second page
        items = response.meta['items']
        names = {'name1': 'Headings',
                 'name2': 'paragraphs',
                 'name3': '3 projects',
                 'name4': 'About Dmoz',
                 'name5': 'languages',
                 'name6': 'You can make a difference',
                 'name7': 'additional information'
                 }

        finder = {'find1': 'h2::text, #mainContent h1::text',
                  'find2': 'p::text',
                  'find3': 'li~li+li b a::text, li:nth-child(1) b a::text',
                  'find4': '.nav ul a::text, li:nth-child(2) b a::text',
                  'find5': '.nav~.nav a::text',
                  'find6': 'dd::text, #about-submit::text',
                  'find7': 'li::text, #about-more-info a::text'
                  }

        for name, find in zip(names.values(), finder.values()):
            items[name] = response.css(find).extract()
        yield items
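One way to avoid repeating both the yield response.follow(...) lines and the identical for loop is to drive them from data. A rough sketch under these assumptions: the attributes and callbacks are the ones in the question, and every parse_* callback differs only in its names and finder dictionaries, so a single shared helper can do the extraction:

    def parse(self, response):
        items = {}
        # Loop over the follow-ups instead of writing each yield by hand.
        pages = [(self.about_page, self.parse_about),
                 (self.editor, self.parse_editor)]
        for url, callback in pages:
            yield response.follow(url, callback, meta={'items': items})

    def extract_fields(self, response, items, names, finder):
        # The shared extraction loop, identical for every page.
        for name, find in zip(names.values(), finder.values()):
            items[name] = response.css(find).extract()
        return items

Each parse_* callback then only builds its own names and finder dictionaries and yields self.extract_fields(response, response.meta['items'], names, finder).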

python 3.x – Sort a QuerySet

Can someone help me sort a queryset? I have this:

results = transaction.objects.filter(patient_paid=pac.id).order_by('examination_id').distinct('examination_id')

I want to reorder it by a field of another model. I have tried this, but it does not work for me:

results_final = results.objects.order_by('examen.grupoExamenes_id')

I have also tried this:

results = transactional.objects.filter(patient_quire_id=pac.id).order_by('examination_groupExamples_id', 'examination_id').distinct('examination_id')

Can anybody help me?
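In Django, ordering by a field on a related model uses double-underscore lookups on the queryset itself, rather than dot notation or a second .objects. A rough sketch, assuming the relation and field names from the snippets above (examen and grupoExamenes_id; adjust them to the real schema):

# Order by the related model's field with '__', then by examination_id.
results = (transaction.objects
           .filter(patient_paid=pac.id)
           .order_by('examen__grupoExamenes_id', 'examination_id')
           .distinct('examen__grupoExamenes_id', 'examination_id'))

Note that on PostgreSQL the fields passed to .distinct() must match the leftmost fields of .order_by(), which is why both fields appear in both calls here.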

python 3.x – train test split based on the values of a column – sequentially

I have a data frame like the one below:

df = pd.DataFrame({"Col1": ['A', 'B', 'B', 'A', 'B', 'B', 'A', 'B', 'A', 'A'],
                   "Col2": [-2.21, -9.59, 0.16, 1.29, -31.92, -24.48, 15.23, 34.58, 24.33, -3.32],
                   "Col3": [-0.27, -0.57, 0.072, -0.15, -0.21, -2.54, -1.06, 1.94, 1.83, 0.72],
                   "Y": [-1, 1, -1, -1, -1, 1, 1, 1, 1, -1]})

   Col1   Col2   Col3  Y
0  A     -2.21  -0.270 -1
1  B     -9.59  -0.570  1
2  B      0.16   0.072 -1
3  A      1.29  -0.150 -1
4  B    -31.92  -0.210 -1
5  B    -24.48  -2.540  1
6  A     15.23  -1.060  1
7  B     34.58   1.940  1
8  A     24.33   1.830  1
9  A     -3.32   0.720 -1

Is there a way to split the data frame (60:40 split) so that, for each value of Col1, the first 60% of its rows go to train and the last 40% to test?

Train:

   Col1   Col2   Col3  Y
0  A     -2.21  -0.270 -1
1  B     -9.59  -0.570  1
2  B      0.16   0.072 -1
3  A      1.29  -0.150 -1
4  B    -31.92  -0.210 -1
6  A     15.23  -1.060  1

Test:

   Col1   Col2   Col3  Y
5  B    -24.48  -2.540  1
7  B     34.58   1.940  1
8  A     24.33   1.830  1
9  A     -3.32   0.720 -1
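A rough sketch of one way to do it, preserving row order and taking the first 60% of each Col1 group via a per-group cumulative count (the 0.6 fraction and the column name come from the question):

import pandas as pd

frac = 0.6
size = df.groupby("Col1")["Col1"].transform("size")  # rows in each group
rank = df.groupby("Col1").cumcount()                 # 0-based position within its group

train = df[rank < size * frac]
test = df[rank >= size * frac]

On the example above this yields exactly the train rows 0 to 4 and 6, and the test rows 5, 7, 8 and 9.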

python 3.x – ValueError: Index contains duplicate entries, cannot reshape

To import quotes from Yahoo Finance, I use the following script.

start = "2000-1-1"
#end = date.today ()
end = "2019-3-30"

os.chdir ("G: / Py_2019 / Py_micartera_POO / Ficheros_Yahoo")

"" "Column names of the csv file:" list_names "and" list_tickers "" ""
data = pd.read_csv ("lista_valores_yahoo.csv")

# Let's define a function to do it
def import (tickers, start, end):
    def data (ticker):
        return pd_data.DataReader (ticker, 'yahoo', start, end)
    datas = map (data, tickers)
    return pd.concat (datas, keys = tickers, names =['Ticker','Date'])

all_the_precios = import (data["lista_tickers"], start, end)
# Reset the index
prices_adjusted = all_the_prices[['Adj Close']].reset_index ()
# Now we put the cp of stocks in columns
close_daily = prices_adjusted.pivot ('Date', 'Ticker', 'Adj Close')
# Change the name of the columns
close_daily = close_diario.rename (columns = data["lista_nombres"])
selection = closing_daily['2016-01-01':'2019-03-30']
selection[:3]

It returns a DataFrame correctly.

If, as an "end" date, I use "end = date.today ()", it returns the error

ValueError: Index contains duplicate entries, cannot reshape

What is the explanation? I would appreciate your help.
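The pivot fails because, with end = date.today(), the downloaded data contains at least one duplicated (Date, Ticker) pair (often a symptom of Yahoo returning today's still-forming bar twice), and pivot refuses to reshape ambiguous input. A rough sketch of two ways around it; whether duplicates should be dropped or aggregated depends on your data:

# Drop exact duplicate (Date, Ticker) pairs before pivoting...
prices_adjusted = prices_adjusted.drop_duplicates(subset=['Date', 'Ticker'])
close_daily = prices_adjusted.pivot('Date', 'Ticker', 'Adj Close')

# ...or use pivot_table, which aggregates duplicates instead of failing.
close_daily = prices_adjusted.pivot_table(index='Date', columns='Ticker',
                                          values='Adj Close', aggfunc='last')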

3x Supermicro Dual E5-2670 | 128GB ECC RAM | Rail Kit | IPMI | 4x NIC | 2x 480GB SSD | $700 each

Selling 3x Supermicro 4-Bay, Dual E5-2670, 2x 480GB SSD, 128GB ECC REG DDR3, IPMI, rail kit for $800 PP/CC/Wire or $700 in cryptocurrency, free shipping anywhere in the continental USA.

Good condition, fully functional, recent pulls, with some minor scratches/dents typical of pallet stacking and installation.

We recently bought these servers for a project and then decided to go in another direction. The servers are located at Quadranet Miami.

If you wish to buy them, we will provide you with a package; we will be monitoring this thread and can be contacted by private message or by email at tropihost@gmail.com.

Regards,
TH

python 3.x – Multiprocessing of a for loop in time series

I want to use multiprocessing for a 'for' loop. Basically, I take an ID and PID from another data frame and pass them to the main data frame; depending on the ID and PID, it filters the data frame and creates a model. I tried it, but it takes the same amount of time.

Code:

from time import time
import multiprocessing
import defs

if __name__ == '__main__':
    currtime = time()
    test = []
    for index, row in temp_sub.iterrows():
        local_id = row["ID"]
        local_pid = row["PID"]
        p = multiprocessing.Pool(multiprocessing.cpu_count())
        df = p.apply(defs.framing_df, (df, local_id, local_pid))
        test.append(df)
    print('parallel: elapsed time:', time() - currtime)


# temp_sub: small data frame consisting of IDs and PIDs
# df: main data frame
# defs: I wrote a framing_df function and saved it in the defs.py file

Where am I making a mistake? Please let me know. It works correctly, but it takes the same time as the sequential process.
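The likely culprit: Pool.apply() blocks until its single call returns, so the loop above still runs one task at a time, and it also creates a brand-new Pool on every iteration. A rough sketch that submits all tasks at once with starmap so they actually run in parallel (temp_sub, df, and defs.framing_df as in the question):

from time import time
import multiprocessing
import defs

if __name__ == '__main__':
    currtime = time()
    # Build the full argument list first, one tuple per (ID, PID) row.
    args = [(df, row["ID"], row["PID"]) for _, row in temp_sub.iterrows()]
    # One pool, all tasks submitted together; starmap unpacks each tuple.
    with multiprocessing.Pool(multiprocessing.cpu_count()) as p:
        test = p.starmap(defs.framing_df, args)
    print('parallel: elapsed time:', time() - currtime)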

python 3.x – Iterate through groups of rows with different indexed values

The data looks like this:

Data = {'group_id': ['1', '1', '1', '1', '2', '2', '2', '2'],
        'source': ['Twitter', 'Instagram', 'Twitter', 'Facebook',
                   'Facebook', 'Twitter', 'Instagram', 'Facebook'],
        'Severity': [4, 2, 7, 4, 8, 9, 3, 5]}

I need:

1) Take the severity code from the first row of each group.
2) Get the absolute difference between every row and the severity code identified for its group (from #1). Example: group 1's severity code is 4, so the first row's diff = 0, the second row's diff = 2, the third row's diff = 3; the same for group 2.
3) Within each group, find the nearest neighbor of each source to the severity of the first row.

I have identified the first row of each group and indexed its severity code, but when iterating, the code only uses the last indexed severity code to calculate the difference.

df = pd.DataFrame(Data)
first_row = df.groupby(['group_id']).first()
for row in first_row.itertuples(index=True, name='Pandas'):
    value = getattr(row, 'Severity')
    df['dif'] = (df['Severity'] - value).abs()

I expect the output to be a DataFrame with a 'dif' column added, from which I can extract the nearest neighbors in each group for each source where True. Then repeat the process: extract the rows where True, and pass over the False rows to find additional rows with a new first-row severity. Repeat until there are no rows left, or all rows are False.
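For step 2, the loop can be avoided entirely: a groupby transform broadcasts each group's first severity to all of its rows, so every row is compared against its own group's value instead of the last one seen. A minimal sketch using the Data dictionary above:

import pandas as pd

df = pd.DataFrame(Data)
# Broadcast the first Severity of each group to every row of that group.
first_severity = df.groupby('group_id')['Severity'].transform('first')
df['dif'] = (df['Severity'] - first_severity).abs()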