Redirect a list of URLs to another URL, using functions.php

What I want to do

I have a series of WordPress URLs that I need to redirect, sending a 301 permanent redirect header to the browser.

The URLs to redirect are:

https://www.mydomain.com.au/search-result/?location=victoria

https://www.mydomain.com.au/search-result/?location=new-south-wales

https://www.mydomain.com.au/search-result/?location=queensland

https://www.mydomain.com.au/search-result/?location=south-australia

https://www.mydomain.com.au/search-result/?location=tasmania

https://www.mydomain.com.au/search-result/?location=northern-territory

Where to redirect to

I want to redirect them to the home page: https://mydomain.com.au/

I'm not sure whether it's better to test for all six location= values, or just to test for the one location= value that should not be redirected.

The one that should not be redirected is ?location=western-australia, e.g.:

https://www.mydomain.com.au/search-result/?location=western-australia

Additional considerations

Please note that there are other /search-result/ URLs that have different variables in the query strings, such as ?weather=... or ?water=.... For example, https://www.mydomain.com.au/search-result/?location=victoria&weather=part-shade&water=&pasture=

As seen in that example, there may also be multiple variables in the query string, like ?location=tasmania&weather=&water=moderate&pasture=.

So I need to test for the presence of the above location= values regardless of whether there are other variables after them. The location= variable will always be the first in the query string.

I'm thinking it can be as simple as testing for /search-result/ AND victoria, tasmania, northern-territory, etc. in the URL.
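Here is a minimal functions.php sketch of that idea. It is a rough example rather than a drop-in solution: add_action, wp_safe_redirect and home_url are standard WordPress functions, but the function name, the REQUEST_URI check and the list of locations are my assumptions based on the URLs above.

add_action( 'template_redirect', 'mydomain_redirect_location_searches' );

function mydomain_redirect_location_searches() {
    // Only look at /search-result/ requests that carry a location= parameter.
    if ( false === strpos( $_SERVER['REQUEST_URI'], '/search-result/' ) || empty( $_GET['location'] ) ) {
        return;
    }

    // The six location values that should be sent to the home page.
    // western-australia is deliberately absent, so it is left alone.
    $locations_to_redirect = array(
        'victoria',
        'new-south-wales',
        'queensland',
        'south-australia',
        'tasmania',
        'northern-territory',
    );

    if ( in_array( $_GET['location'], $locations_to_redirect, true ) ) {
        wp_safe_redirect( home_url( '/' ), 301 ); // 301 permanent redirect
        exit;
    }
}

Listing the six values explicitly (rather than redirecting everything that isn't western-australia) means any new location value added later is left untouched by default.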

A different approach?

Would it make sense to do this using a .htaccess redirect, instead of having WordPress do it? I'm not sure of the advantages or disadvantages of each approach.
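For comparison, a rough .htaccess equivalent of the same redirect, assuming mod_rewrite is available. One point in its favour is that the redirect is issued before WordPress loads at all. The main catch is that plain Redirect/RedirectMatch directives cannot see the query string, so a RewriteCond is needed; the trailing ? on the target drops the original query string:

RewriteEngine On
# match location= as the first query-string variable, with any of the six values
RewriteCond %{QUERY_STRING} ^location=(victoria|new-south-wales|queensland|south-australia|tasmania|northern-territory)(&|$) [NC]
RewriteRule ^search-result/?$ https://mydomain.com.au/? [R=301,L]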

Sharepoint adds spam to URLs in calendar list item

SharePoint adds all sorts of unwanted punctuation, like "&", to URLs in calendar list items, and the resulting links are not very useful.

Example:

https://www.google.com/?gws_rd=ssl

actually becomes:

https://www.google.com/?gws_rd=ssl

Is there a quick fix?

Facebook URLs shared on Twitter: link preview is not generated and user has to write snippet by himself

When users share a Facebook URL on Twitter, a link preview is not generated
and the user must write the snippet themselves.
(To see the failure, check, for example, https://www.facebook.com/ with https://cards-dev.twitter.com/validator .)

On the other hand, when users share a Twitter URL on Facebook, everything works fine.
(Check with, for example, https://developers.facebook.com/tools/debug/?q=twitter.com )

I don't own facebook.com, so I can't add the Open Graph tags mentioned in
https://developer.twitter.com/en/docs/tweets/optimize-with-cards/guides/getting-started#twitter-cards-and-open-graph
to what Facebook serves to Twitter. Nor can I tell Twitter how it should
parse the HTML that Facebook serves.

Outlook invite URLs through the web not picking up the supplied start and end dates in the dtstart and dtend parameters

URL: https://outlook.live.com/owa/?rru=addevent&dtstart=20200330T011159&dtend=20200330T031159&title=&location=&description=&allday=false

Redirect all requested URLs to a new URL

I want to redirect all requested URLs in WordPress to a new URL, for example:
domain.com/c/camera
to
domain.com/product-category/camera

How can I do this in htaccess?
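A minimal .htaccess sketch, under the assumption that every URL beginning with /c/ should map to the same path under /product-category/, that mod_rewrite is enabled, and that the rule is placed above the standard WordPress rewrite block:

RewriteEngine On
# 301-redirect /c/anything to /product-category/anything
RewriteRule ^c/(.+)$ /product-category/$1 [R=301,L]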

python: scrape multiple urls with bs4

I am trying to compile patent files from USPTO website with BeautifulSoup.


import requests
import bs4

# df is an existing pandas DataFrame with a 'link' column of URLs
urls = df['link'].to_numpy()
urls
for i in urls:
    page = requests.get(i)
    ## storing the content of the page in a variable
    txt = page.text
    ## creating BeautifulSoup object
    soup = bs4.BeautifulSoup(txt, 'html.parser')
    soup


However, it only prints one of the URLs, not all 5 links. I need all 5 links scraped as text.

Any suggestions appreciated. Cheers.


array(('http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n',
       'http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=2&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n',
       'http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=3&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n',
       'http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=4&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n',
       'http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=5&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n'),
      dtype=object)

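One way to keep all five pages, sketched under the assumption that urls holds the array shown above: append each parsed page to a list instead of overwriting the single soup variable, then print each one.

import requests
import bs4

# assumes urls already holds the five USPTO links shown above
soups = []
for url in urls:
    page = requests.get(url)
    # keep every parsed page instead of overwriting the previous one
    soups.append(bs4.BeautifulSoup(page.text, 'html.parser'))

# print the plain text of each of the five pages
for soup in soups:
    print(soup.get_text())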

Can I scrape keywords from URLs?

Hi, I hope everyone is safe and stays well. I would like to know if there is a way to use Scrapebox to scrape the keywords that other websites (URLs) are using. If so, how would you do it?

docker: Nginx rewrites URLs to match proxy address

I am running a dockerized WordPress container; the site is accessible through the host machine's port 8000. If I go to localhost:8000, boom, I can see my WordPress site.

It is tedious to always type localhost:8000 to see my website, so I decided to set up nginx as a reverse proxy for my site. I have configured a virtual host in nginx named proxy.site, and now I can access the WordPress site by visiting http://proxy.site.

Up to this point everything is fine: when http://proxy.site opens, I can see a list of my blog posts. Say I want to read my latest blog post about COVID-19; when I click on the link, it opens as http://localhost:8000/posts/covid19.

I want it to open with the proxy URL, as in http://proxy.site/posts/covid19. I need the entire site to be accessible through the http://proxy.site hostname.

I need nginx to rewrite all my links from localhost:8000/* to proxy.site/*; nobody loves typing ports when accessing a blog.

This is what my nginx conf file looks like

server {
        listen 80;
        listen [::]:80;

        root /var/www/proxy.site/html;
        index index.html index.htm index.nginx-debian.html;

        server_name proxy.site www.proxy.site;

        location / {
                proxy_pass http://localhost:8000;
                #proxy_set_header HOST $host;
                #proxy_redirect http://localhost:8000/ http://proxy.site/ ;
                #try_files $uri $uri/ =404;
        }
}

How do I rewrite all the URLs on the proxy site with my custom hostname?
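One possible direction, sketched as an assumption rather than a verified fix: forward the visitor's Host header, rewrite Location headers coming back from the upstream, and rewrite absolute URLs inside the HTML with nginx's sub_filter (which requires ngx_http_sub_module). The cleaner long-term fix is usually to change WordPress's siteurl/home settings to http://proxy.site so it stops generating localhost:8000 links in the first place.

        location / {
                proxy_pass http://localhost:8000;
                # pass the hostname the visitor actually used
                proxy_set_header Host $host;
                # rewrite redirects (Location headers) issued by the upstream
                proxy_redirect http://localhost:8000/ http://proxy.site/;
                # disable upstream compression so sub_filter can see the HTML
                proxy_set_header Accept-Encoding "";
                # rewrite absolute URLs inside the response body
                sub_filter 'http://localhost:8000' 'http://proxy.site';
                sub_filter_once off;
        }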


How to get all listing urls from home page with python web scraping

I wrote code for web scraping, and it is fine except for two problems. On the details page everything works except the ISBN number, and from the main page I need all the listing URLs so that my code can scrape the data from each listing. How can I solve this? The URLs (main page and detail page) are in the code. Thank you!

Here is my code:

import requests
from bs4 import BeautifulSoup
import csv

def get_page(url):
    response = requests.get(url)

    if not response.ok:
        print('server responded:', response.status_code)
    else:
        soup = BeautifulSoup(response.text, 'html.parser') # 1. html , 2. parser
    return soup

def get_detail_data(soup):

    try:
        title = soup.find('span',class_="title product-field",id=False).text
    except:
        title = 'empty'  
    print(title)
    try:
        writer = soup.find('a',class_="contributor-name",id=False).text
    except:
        writer = 'empty'  
    print(writer)   
    try:
        original_price = soup.find('div',class_="original-price",id=False).find('span').text
    except:
        original_price = 'empty'  
    print(original_price)  
    try:
        active_price = soup.find('div',class_="active-price",id=False).find('span').text
    except:
        active_price = 'empty'  
    print(active_price)     
    try:
        img = soup.find('div',class_="image-actions image-container product-type-icon-container book",id=False).find('img').get('src')
    except:
        img = 'empty'  
    print(img)   
    try:
        isbn = soup.find('div',class_="bookitem-secondary-metadata",id=False).find('li').attrs('ISBN: ')
    except:
        isbn = 'empty'  
    print(isbn) 
    data = {
        'title'             :   title,
        'writer'            :   writer,
        'original_price'    :   original_price,
        'active_price'      :   active_price,
        'image'             :   img,
        'isbn'              :   isbn
    }
    return data

def get_index_data(soup):
    titles_link = soup.find_all('a',class_="body_link_11")
    try:
        inks = soup.find('div', class_="item-info",id=False).find('p').find('a').get('href')
    except:
        inks = "empty"
    print(inks)

def main():
    #detail_page_url = "https://www.kobo.com/ww/en/ebook/mum-dad-1"
    mainurl = "https://www.kobo.com/ww/en/list/new-hot-in-fiction/youL53408U25RHrVu3wR5Q"
    #get_page(url)
    #get_detail_data(get_page(detail_page_url))
    get_index_data(get_page(mainurl))
if __name__ == '__main__':
    main()
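As a sketch of the index-page part only (it does not touch the ISBN problem), get_index_data could collect every listing link instead of just the first one; the item-info class and the link placement are taken from the code above, not verified against the live page:

def get_index_data(soup):
    # gather the href of every listing on the index page
    links = []
    for item in soup.find_all('div', class_="item-info"):
        a = item.find('a')
        if a and a.get('href'):
            links.append(a.get('href'))
    return links

# usage: collect the detail-page URLs from the main page,
# then run the existing detail scraper on each of them
# listing_urls = get_index_data(get_page(mainurl))
# for url in listing_urls:
#     get_detail_data(get_page(url))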

possible new destination URLs for current accounts.

I keep getting the same message thousands of times. I have removed/blocked the domain/URL in the global options as well as in the specific project (after deactivating all projects except one to isolate the problem).
The message that appears over and over is:
15:46:00: (-) 1/1 PR-0 too low – http://www.gomaze-play.de/index.php?page=Register&action=register
15:46:00: (+) 001 possible new destination URLs for current accounts.
The domain is already listed in Project > Options > "Skip sites with the following words in URL/domain".