python – Pig Latin Converter

I have written a Pig Latin converter program, which I attach via Pastebin. I'm still a beginner; I know the code works, but can we make some improvements to it, since I don't like to use break statements too much? Also, is there a way to make this code shorter? Looking for your kind suggestions. Thank you.

'''Pig Latin is a language constructed by transforming English words. While the
origins of the language are unknown, it is mentioned in at least two documents from
the nineteenth century, suggesting that it has existed for more than 100 years. The
 following rules are used to translate English into Pig Latin:
• If the word begins with a consonant (including y), then all letters at the beginning of
the word, up to the first vowel (excluding y), are removed and then added to the end
of the word, followed by ay. For example, computer becomes omputercay
and think becomes inkthay.
• If the word begins with a vowel (not including y), then way is added to the end
of the word. For example, algorithm becomes algorithmway and office
becomes officeway.
Write a program that reads a line of text from the user. Then your program should
translate the line into Pig Latin and display the result. You may assume that the string
entered by the user only contains lowercase letters and spaces.

'''


def pig_latin(word):
    word = word.strip().lower()
    const_tail = 'ay'
    vow_tail = 'way'
    pig_latin = ''
    vowel = ('a', 'e', 'i', 'o', 'u')
    for i in range(len(word)):
        if word[0] in vowel:
            pig_latin += word + vow_tail
            break
        else:
            if word[i] in vowel:
                pig_latin += word[i:] + word[0:i] + const_tail
                break
    return pig_latin

def main():
    word = str(input('Enter the word: '))
    print('The pig latin translation of the string is',pig_latin(word))


if __name__ == "__main__":
    main()
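
For reference, here is a sketch of one shorter shape the word-level conversion could take, following the rules in the docstring (illustrative only, not necessarily the best answer; the name pig_latin_short is only for illustration):

    def pig_latin_short(word):
        # Sketch of a more compact version without an explicit break.
        vowels = 'aeiou'
        word = word.strip().lower()
        if not word:
            return ''
        if word[0] in vowels:
            return word + 'way'
        # Index of the first vowel; if there is none, the whole word moves.
        first_vowel = next((i for i, ch in enumerate(word) if ch in vowels), len(word))
        return word[first_vowel:] + word[:first_vowel] + 'ay'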

co.combinatorics – graph constructed from orthogonal Latin squares

I asked the following question on Math.StackExchange, with a bounty, and received no answer or comment. Maybe I will get additional comments here. The problem arose while reading some articles on finite geometry, and I started to wonder whether anyone had studied this before.

A reminder on Latin squares: given a set $S$ of $n$ elements (we will write $[n]$ in what follows for simplicity), a Latin square $L$ is a function $L : [n] \times [n] \to S$, that is, an $n \times n$ array with entries in $S$ such that each element of $S$ appears exactly once in each row and each column. For example,

[image: an example Latin square]

Let $L_1$ and $L_2$ be two Latin squares on the ground sets $S_1$, $S_2$ respectively. They are called orthogonal if for every $(x_1, x_2) \in S_1 \times S_2$ there is exactly one $(i, j) \in [n] \times [n]$ such that $L_1(i, j) = x_1$ and $L_2(i, j) = x_2$. For example, the following are two orthogonal Latin squares of order 3.

[image: two orthogonal Latin squares of order 3]
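
For concreteness, one standard pair of orthogonal Latin squares of order 3 (an illustrative choice, not necessarily the pair in the missing image) is
$$ L_1 = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \\ 3 & 1 & 2 \end{pmatrix}, \qquad L_2 = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \\ 2 & 3 & 1 \end{pmatrix}, $$
and superimposing them produces each of the nine ordered pairs in $S_1 \times S_2$ exactly once.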

It is known that there are at most $n-1$ mutually orthogonal Latin squares of order $n$, and that this bound is attained if and only if there exists a projective plane of order $n$.

The definition of the graph: I am building a graph $G_n$ whose vertex set is the set of Latin squares of order $n$, with two vertices adjacent if the corresponding Latin squares are orthogonal.

I want to understand some properties of this graph. For simplicity, I consider squares up to permutation of $[n]$, so w.l.o.g. all my squares have first row $1, 2, \ldots, n$ (in that order). In fact, if I call $H_n$ the graph not taken up to permutations, then $H_n$ is the $n!$-fold blow-up of $G_n$, or, using the tensor product, $$ H_n = G_n \times K_{n!}. $$
Since I am mainly interested in the chromatic number of my graph, and we know that $\chi(H_n) \leq \min\{\chi(G_n), n!\}$, I will only study $G_n$.

For example $ G_2 = K_1 $, $ G_3 = K_2 $.

I know that:

  • It is trivial that $G_n$ is not complete.
  • If there exists a projective plane of order $n$, then $G_n$ contains $K_{n-1}$ as a subgraph, and $\chi(G_n) \geq n-1$.

  • $G_4$ consists of 2 disjoint copies of $K_3$ and 18 isolated vertices, for a total of 24 Latin squares.

  • $G_5$ consists of 36 disjoint copies of $K_4$ and 1,200 isolated vertices, for a total of 1,344 Latin squares.

  • The case $n = 6$ would be the first interesting case, since there is no projective plane of order 6, so we will not find $K_5$ in $G_6$. It has been known since 1901 (from Tarry, who checked all the Latin squares of order 6 by hand) that there is no pair of mutually orthogonal Latin squares of order 6. So $G_6$ consists only of isolated vertices (1,128,960 of them).

  • It is also known that $n = 2$ and $n = 6$ are the only cases with only isolated vertices. (See Design Theory by Beth, Jungnickel and Lenz.)

  • From the article "Monogamous Latin squares" by Danziger, Wanless and Webb, available on Wanless's website: the authors show that for every $n > 6$, if $n$ is not of the form $2p$ for a prime $p \geq 11$, then there is a Latin square of order $n$ that has an orthogonal mate but is not in any triple of mutually orthogonal Latin squares. Therefore our graph $G_n$ will have some isolated copies of $K_2$.

I am wondering about the following:

Conjecture: For any $n$, $G_n$ is a disjoint union of complete subgraphs (of possibly different sizes).

In other words, the orthogonality relation is transitive (when restricted to our Latin squares with first row fixed to $1, 2, \ldots, n$).

I would appreciate any insight, pointers to relevant articles, or any additional known facts.
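
For small orders the structure of $G_n$ can also be checked by brute force. The following is a minimal Python sketch (illustrative only, standard library) that enumerates the order-4 squares with normalised first row and counts the isolated vertices of $G_4$, for comparison with the numbers quoted above:

    from itertools import permutations

    def normalised_latin_squares(n):
        """Latin squares of order n whose first row is (1, ..., n)."""
        symbols = tuple(range(1, n + 1))
        rows = list(permutations(symbols))

        def extend(square):
            if len(square) == n:
                yield tuple(square)
                return
            for row in rows:
                # A row may be appended only if it repeats no symbol in any column.
                if all(row[j] != prev[j] for prev in square for j in range(n)):
                    yield from extend(square + [row])

        yield from extend([symbols])

    def orthogonal(L1, L2, n):
        """True if the n*n ordered pairs of superimposed entries are all distinct."""
        return len({(L1[i][j], L2[i][j]) for i in range(n) for j in range(n)}) == n * n

    n = 4
    squares = list(normalised_latin_squares(n))
    print(len(squares))  # 24 squares of order 4 with first row (1, 2, 3, 4)

    degrees = [sum(orthogonal(s, t, n) for t in squares if t != s) for s in squares]
    print(sum(1 for d in degrees if d == 0))  # number of isolated vertices of G_4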

Internationalization: are Pinyin, Chinese or Latin domain names more attractive to Chinese users?

I am interested in obtaining a domain to attract the attention of Chinese users, but I have some doubts.

What are the best domain names for attracting the attention of people who speak and read Chinese: a domain in Pinyin, in Chinese characters, or even in Latin characters? And if the site name is in Pinyin, should you also register the same domain name in Chinese characters?

html5: is it correct that I no longer need to escape extended Latin characters in HTML when I use UTF-8 encoding?

Some background first:

2003 – 2010

In 2003, I switched from HTML 4.01 to XHTML 1.0 and encoded my
XHTML documents using ISO-8859-4.



2010 – 2013

In 2010, I switched from XHTML 1.0 to HTML5, but because
the text editor I was using at the time did not allow me to save text
documents as UTF-8, I kept using ISO-8859-4.

The usefulness of saving documents as UTF-8 only became apparent to me in early
2013, when I started working on a project about Iceland,
which involved frequent use of the characters:

  • æ / Æ (ash)
  • ð / Ð (eth)
  • þ / Þ (thorn)

and many accented vowels (á, é, í, ó, ú, ý).


2013 – present

So in 2013 I found a new text editor that allowed me to save
documents using UTF-8 encoding, and I started declaring and saving my documents as UTF-8.



Here is the key point:

During 2003-10 and 2010-13, on the rare occasions when I needed to show an extended Latin character such as â, é or ü, I always used the standard HTML escapes (HTML entities) such as &acirc;, &eacute; or &uuml;.

Since it had become a habit, after finishing my Icelandic project in 2013, whenever I wrote, saved, edited and uploaded UTF-8 encoded HTML5 documents, I kept using:

  • &szlig;, &auml;, &ouml; etc. if I was writing something in German;
  • &ntilde;, &aacute;, &oacute; etc. if I was writing something in Spanish;
  • &ccedil;, &egrave;, &ocirc; etc. if I was writing something in French

etc.


In my head, I had the idea that it was both safer and better to use an HTML entity whenever possible. (Maybe that came from knowing that it is always better to write &amp; rather than &, and certainly safer to write &#39; rather than '.)

But recently I have come across the following statements (opinions?):

Unnecessary use of HTML character references can significantly reduce
HTML readability. If the character encoding for a web page is chosen
appropriately, then HTML character references are usually only
required for the markup-delimiting characters (<, >, " and &).

Source: Character encodings in HTML, Wikipedia

  • Generally, you don't need to use HTML character entities if your editor supports Unicode.
  • best practice is to forgo the use of HTML entities and use the real UTF-8 character
  • If your pages are encoded correctly in utf-8, you shouldn't need html entities, just use the characters you want directly.

Source: When should HTML entities be used? on Stack Overflow


Overall, I conclude that for most of the 2010s (certainly in the early 2010s) it was probably still safer to write &ouml; rather than ö, because a document might be retrieved by a user agent (an older screen reader, for example) that did not understand UTF-8.

But I am now concluding that in 2020, UTF-8 is so well established as the standard web encoding that it is definitely safe, by default, to write ö directly in a document saved as UTF-8, and that although I can still continue to use HTML entities like &amp;, &lt;, &#39; etc., I no longer need to worry about using HTML entities like &agrave; and &ecirc; for extended Latin characters.

Is this correct?
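
As a quick sanity check of the equivalence in question, a short Python sketch (illustrative only): the named reference &ouml; and the literal character ö are the same text once parsed, and in a UTF-8 file the literal character is simply two bytes.

    import html

    # A named character reference and the literal character are the same text
    # once the string is unescaped.
    print(html.unescape("&ouml;") == "ö")   # True

    # In a file saved as UTF-8, the literal character is just the bytes C3 B6.
    print("ö".encode("utf-8"))              # b'\xc3\xb6'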

Handling extended Latin characters in URLs correctly with PHP and percent-encoding

I am currently tying myself in knots over the percent-encoding of extended Latin characters in URLs.


I have the following URL:

https://example.com/fußgängerbrücke/

The name of the offline folder (which I have uploaded via FTP) corresponds exactly to this: fußgängerbrücke


Wherever an internal link anywhere on the site points to this URL, the link now takes the percent-encoded form.


If I cut and paste the URL from the URL bar in Firefox, it is pasted as: fu%C3%9Fg%C3%A4ngerbr%C3%BCcke


But… if I now use PHP to get the URL:

$My_URL = $_SERVER['SCRIPT_NAME'];

and then use $My_URL to retrieve some related data:

file_get_contents($_SERVER['DOCUMENT_ROOT'].$My_URL.'/my-data.php');

It does not work.

I thought about this and concluded that this could be what is happening:

  • file_get_contents() is (somehow?) detecting the extended Latin characters in $My_URL and parsing fußgängerbrücke as fu%C3%9Fg%C3%A4ngerbr%C3%BCcke. (Can this be correct?)
  • It then looks for a folder that literally exists as /fu%C3%9Fg%C3%A4ngerbr%C3%BCcke/ and cannot find it, because the only folder that exists is /fußgängerbrücke/.

So, to test this hypothesis, I tried:

file_get_contents($_SERVER['DOCUMENT_ROOT'].urldecode($My_URL).'/my-data.php');

which works (hooray!), but… well, it seems strange.
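
As an aside, the mapping itself is easy to check outside PHP; a few lines of Python (illustrative only) show that the percent-encoded form is exactly the UTF-8 bytes of the literal folder name, and that decoding restores it:

    from urllib.parse import quote, unquote

    name = "fußgängerbrücke"
    encoded = quote(name)              # percent-encodes the UTF-8 bytes of the name
    print(encoded)                     # fu%C3%9Fg%C3%A4ngerbr%C3%BCcke
    print(unquote(encoded) == name)    # True: decoding restores the literal name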

I felt uncomfortable with this, since I am not trying (and do not need) to decode percent-encoded URLs anywhere else on the site, in any other context, and this makes for a strange one-off exception. For the sake of consistency, I would prefer to use percent-encoding everywhere.


Then… I went back to the original offline folder, renamed it from fußgängerbrücke to fu%C3%9Fg%C3%A4ngerbr%C3%BCcke, and uploaded it to replace the previous folder.

Guess what…? The new URL does not resolve!

(Presumably because the server now automatically decodes the percent-encoding and tries to find the folder /fußgängerbrücke/ in the web space… which is not there.)


So what am I missing here? Two questions:

  • Does file_get_contents() automatically percent-encode extended Latin characters, which must then be explicitly percent-decoded again?
  • Is it impossible to have folder names and file names in URLs that are already percent-encoded?

Canonical URLs: should extended Latin characters in URLs (ü, ö, etc.) be percent-encoded as standard?

I am putting together a site in English that contains its own German translation (don't worry, I have lived in Germany and have a degree in Germanic and Slavic studies; it is proper German…).

I am wondering what best practice is regarding extended Latin characters in URLs.

If I have a URL like:

https://example.com/fußgängerbrücke/

Is it better to link to it internally as:

  • a) /fußgängerbrücke/
  • b) /fu%C3%9Fg%C3%A4ngerbr%C3%BCcke/
  • c) /fussgaengerbruecke/

I have no problem doing any of the above, and I am very happy to use .htaccess mod_rewrite if necessary to ensure that all variants 301-redirect to the correct canonical page.

On that note, a secondary question: what format (if different) should I use for the canonical URL in the <link rel="canonical"> element in the <head>?

Promote your music by featuring it to our Reggaeton audience of 24k listeners for $20

If you need to promote your electronic music on a playlist with an active audience, this is the service you should use.

Our Electro Music playlist has more than 11k followers and continues to grow weekly.

When you buy this service, your song will be added to the playlist for 1 month at an intermediate position within the playlist.

Playlist: look at the image

We only accept electronic music in this playlist.

by: dannymendez
Created: –
Category: Audio and Music
Viewed: 247



python – Translate English to Pig Latin | PIG_LATIN.PY

First, at the top, you list all the consonants. There are two things that can be improved here:

  • Since you only use it to check whether something is a consonant or not, it should be a set. Membership testing in a set is much more efficient than in a list. Simply replace the () with {}:

    consonants = {'b', 'c', 'd', 'f', 'g', 'h', 'j', 'k', 'l', 'm', 'n', 'p', 'q', 'r', 's', 't', 'v', 'w', 'x', 'y', 'z'}
    
  • Second, there is a slightly less painful way to generate those letters. Python's string module contains an ascii_lowercase constant holding 'abcdefghijklmnopqrstuvwxyz'. You can use that together with a set of vowels to limit the letters to be encoded:

    import string as s
    
    vowels = {'a', 'e', 'i', 'o', 'u'}
    consonants = set(s.ascii_lowercase) - vowels  # Consonants are the set of letters, minus the vowels
    

    I personally prefer this way.

You could also simply change your algorithm to use vowels directly.
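
For instance, a sketch of that idea (not code from the original post): the consonant test can be phrased in terms of the vowels alone, so no consonant set is needed at all:

    vowels = {'a', 'e', 'i', 'o', 'u'}

    def is_consonant(letter):
        # Anything that is not one of the five vowels counts as a consonant here.
        return letter.lower() not in vowels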


Just to clarify something,

word_copy = word

does not create a copy of word. It only creates another name for the word string. For strings this does not matter because strings are immutable, but with a mutable object, this will bite you:

my_list = []
list_copy = my_list  # Does not actually create a copy!
my_list.append(1)
print(my_list, list_copy)  # prints [1] [1]

Notice how both lists got appended to. This happens because there is really only one list; both names refer to the same list.

For the sake of clarity, I would change the name to say what its purpose is. However, I can't see the need for word_copy at all! It would make sense if it were used as an accumulator in a loop or something, but the only place it is used is word_copy[0], and since you never reassign word, you could just do word[0]. I would simply get rid of word_copy.

Along the same lines, I would reconsider ay. The name you have given it is exactly as descriptive as the string it contains, and it is only used in one place. At the very least, I would change it to something meaningful:

pig_latin_suffix = ['a', 'y']

I will also note that there is no reason to use a list of strings here instead of a multi-character string. They behave the same in this case:

" ".join(('a', 'y'))
'a y'

" ".join("ay")
'a y'

Strings are iterable, just like lists.


I think pig_latin is too big. It is doing two main jobs: splitting the message into words and processing the words. I would make the processing step its own function:

def process_word(word):
    ay = ['a', 'y']
    listed_word = list(word)
    word_copy = word
    moved_consonants = ''

    for letter in listed_word.copy():
        if letter.lower() == 'y':
            if letter in word_copy[0]:
                moved_consonants += letter
                listed_word.remove(letter)
            else:
                break
        elif letter.lower() in consonants:
            moved_consonants += letter
            listed_word.remove(letter)
        else:
            break

    listed_word.append('-' + moved_consonants + ''.join(ay))

    return ''.join(listed_word)

def pig_latin(message):
    new_message = ''

    for word in message.split():
        processed_word = process_word(word)

        new_message += processed_word + ' '

    return new_message

process_word could be discussed further. However, this is already much better. The immediate benefit is that you can now test individual words without having to worry about how the rest of the code will react:

print(process_word("Can"))  # Prints 'an-Cay'

Tighter lower bound on the lower-triangular sum of an arbitrary Latin square

In this question on math.stackexchange.com I am looking for a tighter bound than the one I presented there. Rob Pratt proposes a conjecture in his answer, motivated by the dual of the linear programming relaxation of the integer program. Can anyone give a tight bound or provide a proof of Rob Pratt's conjecture?


BlackHatKings: Proxy Lists
Posted by: Afterbarbag
Time of publication: June 11, 2019 at 02:28 p.m.