exceptions: handling errors in the Nest service layer

I would like to create a REST API with NestJS, but I want to add GraphQL as another, higher-level layer later. So, for starters, I have the basic layers: the controller, the service, and the TypeORM repository. Suppose you want to update a user's username by id. The controller route could be

PATCH /users/:id/username

Two problems may arise in the service or repository layer:

  • The user id may not exist
  • The username may already exist

The basic flow of this operation would be (see the sketch after this list):

  • Get the user by id
  • Handle error if the user does not exist
  • Check if the username already exists
  • Handle error if username already exists
  • Update the user's username
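
A fail-fast version of this flow in the service layer could look roughly like the sketch below. This is only an illustration under assumptions, not code from the project: the UsersService, the injected TypeORM Repository<User>, the User entity, and the domain error classes UserNotFoundError / UsernameTakenError are all made-up names.

users.service.ts (sketch):

import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import { User } from './user.entity'; // assumed entity with id and username columns
import { UserNotFoundError, UsernameTakenError } from './user.errors'; // hypothetical domain errors, defined further below

@Injectable()
export class UsersService {
  constructor(@InjectRepository(User) private readonly repo: Repository<User>) {}

  async updateUsername(id: string, username: string): Promise<User> {
    // Get the user by id; fail fast if it does not exist.
    const user = await this.repo.findOne({ where: { id } });
    if (!user) {
      throw new UserNotFoundError(id);
    }
    // Check if the username already exists; fail fast if it does.
    const existing = await this.repo.findOne({ where: { username } });
    if (existing) {
      throw new UsernameTakenError(username);
    }
    // Update the user's username.
    user.username = username;
    return this.repo.save(user);
  }
}

Note that in this sketch the service only knows about domain errors, not HTTP status codes.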

I'm wondering how I should handle those errors. I could throw exceptions immediately, following the fail-fast concept:

https://en.wikipedia.org/wiki/Fail-fast

NestJS provides some ready-to-use HTTP exceptions that I could use:

https://docs.nestjs.com/exception-filters#built-in-http-exceptions

The problem is that I don't think I should throw HTTP exceptions in my service layer; they belong in the controller logic. So what is a common approach to these errors?

  • Should I return undefined instead of the updated user? Then the controller would not know which part failed.
  • Should I create my own exceptions by extending Error and throw those instead (see the sketch after this list)?
  • Or, since exceptions come with a performance cost, should the return type of the function be something like … ?
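
One common answer to the second bullet is to define small domain exception classes in the service layer and translate them into NestJS HTTP exceptions at the controller boundary. Again, a minimal sketch with assumed names (UserNotFoundError, UsernameTakenError, UsersService), not the actual project code:

user.errors.ts (sketch):

export class UserNotFoundError extends Error {
  constructor(readonly userId: string) {
    super(`User ${userId} does not exist`);
  }
}

export class UsernameTakenError extends Error {
  constructor(readonly username: string) {
    super(`Username "${username}" is already taken`);
  }
}

users.controller.ts (sketch):

import { Body, ConflictException, Controller, NotFoundException, Param, Patch } from '@nestjs/common';
import { UsersService } from './users.service';
import { UserNotFoundError, UsernameTakenError } from './user.errors';

@Controller('users')
export class UsersController {
  constructor(private readonly users: UsersService) {}

  @Patch(':id/username')
  async updateUsername(@Param('id') id: string, @Body('username') username: string) {
    try {
      return await this.users.updateUsername(id, username);
    } catch (e) {
      // Map domain errors to HTTP semantics only at the edge.
      if (e instanceof UserNotFoundError) throw new NotFoundException(e.message);
      if (e instanceof UsernameTakenError) throw new ConflictException(e.message);
      throw e;
    }
  }
}

If the try/catch becomes repetitive, the same mapping can live in a custom exception filter so the controllers stay thin, and a GraphQL layer added later can map the very same domain errors to GraphQL errors instead of HTTP responses.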

Errors updating to 7.2

for bayviewboom.org
The website now says:

The error log says:


[Tue Feb 11 12:04:19.696058 2020] [core:alert] [pid 10136] [client 167.220.25.254:25275] /home/booma/bayviewboom.org/getsimple/.htaccess: Wrapper /usr/local/cpanel/cgi-sys/php5 cannot be accessed: (2)No such file or directory
[Tue Feb 11 12:04:19.697016 2020] [core:alert] [pid 10136]...


search – Trace log – errors

I read many articles but I did not find a solution for my problem.

Our local SP 2019 farm consists of: 2x Application with search, 2x Front-end with distributed cache. Each server has 32 GB of RAM and 8 virtual processors.

The full crawl showed me the following errors:

(screenshot: crawl log errors)

When running a full crawl, the load is as follows:
(screenshot: server load during a full crawl)

Search application topology:
(screenshot: search topology)

Error entries such as "Error processing this item" are supposedly related to high CPU / RAM load, but RAM stays below 80%. I have set the performance level for the search service to Partly Reduced. I increased the timeout value under Farm Search Administration from 60 to 600. The content source contains only one web application, and a full crawl takes more than 5 hours for this content source.

Questions:

  1. How do I get rid of these "Error processing this item" entries? (Would adding another application server with the search role be an option?)
  2. "SharePoint returned an empty response": there are files with a size of 0 KB. Is it possible to add a crawl rule to exclude them?
  3. "The object was not found": when I open the URL for which this error is reported, I get "Page not found". An index reset does not fix this.

I will be grateful if you help me solve these problems.

errors – wordpress hypanis.ru vulnerabilities

Does anyone know anything about the "hypanis.ru" WordPress vulnerability?

A client's site recently picked up a WordPress virus that causes the site to output "hypanis.ru" before the headers are sent on each page. This results in the following error:

hypanis.ru

Warning: Cannot modify header information - headers already sent by (output started at /home/content/html/wp-includes/load.php(1) : runtime-created function:10) in /home/content/html/wp-includes/pluggable.php on line 1219

A search for this on Google shows absolutely no posts / information, but more than 29,000 sites affected by the same virus: https://www.google.com/search?q="hypanis.ru"

That seems like a large number of affected sites for a virus that does not appear in any WordPress vulnerability list or database. Does anyone have any information on this, please?

custom list: Datasheet view does not highlight errors when a required field is missing

There are mandatory columns in my list. When I edit the list through the 'Quick Edit' option on the ribbon and omit any required field, the missing required columns are highlighted with a red exclamation mark.

In contrast, when I tried to enter several values in the datasheet view and omitted a required value, no error was shown for the missing required columns.

We need a datasheet view so that several people can enter values into the list in bulk, and it would be convenient for them if the default view of the list were a datasheet view.

I will create a responsive website for you for $20

I will create a responsive website for you

PLEASE MESSAGE BEFORE ORDERING MY GIG.

Hi, I hope you are well! My name is Humayoun,

This is what you get in my web development offer:
1] Design your WordPress site
2] SEO friendly structure
3] Your content added to the website
4] All the pages you need
5] 100% responsive website on all devices
6] Free advice on how to update the website.
7] Image slider / banner on the homepage if you wish.
☆☆☆ Warranty ☆☆☆

✔ Fully insured
✔ Fast loading pages for better conversion and navigability
✔ fully responsive

So, what are you waiting for?

Inbox me if you have any questions or want a custom design.


vuejs – Vue ui "Execute Task" does not load and shows no errors

I have had a problem since yesterday and cannot continue with my Vue.js project. When I try to run the serve task, it does not load; it just hangs without showing any errors. What could this be due to?

I should clarify that I am building an app that consumes a WordPress API locally through WampServer on port 80, while Vue.js uses port 8080.


plugins: errors when using ajax from an external wordpress page

I am trying to get WordPress user data from an external page within the same domain using AJAX. The problem is that I receive an error response that contains the user data along with the source code of one of my plugins.

here is my code

external_page.php

user_login => $info->user_email);

}
echo json_encode($answer);
exit;
}

?>
















here is the code of my plugin

sample_plugin.php

<?php
function global_session_value()
{
    echo "
    ";
}

add_action('shutdown', 'global_session_value');

?>


I receive an AJAX error response from my external page:

Error message:

// sample_plugin.php code
{"user1": "demo@mysite.com"}   // this is the AJAX response
// sample_plugin.php code

I get a success alert when I deactivate the "sample_plugin" plugin. How can I solve this problem so that I get a successful response without disabling the plugin?

mysql – The Python ThreadPoolExecutor process ends randomly without errors

I am working on a project where I submit several tasks to a thread pool; each task requests data from Twitter, transforms it, and then loads it into a MySQL database.
I am running run_search.py, which takes yesterday's date and then runs search.py with that date and a CSV list of terms. Each thread is assigned its own Twitter API credentials and keeps pulling search terms off the global terms queue. At the moment I am using 3 threads.

My problem is that the process seems to exit randomly after roughly ~10 minutes and does not give me any error message, even though I retrieve the result of each thread. My first thought was that I should explicitly wait on each thread, but according to the docs, using the executor as a context manager implicitly waits for all threads. It never dies in the same place, and never in a place I have generally seen as problematic. I am honestly perplexed at this point.

I have provided a list of sample terms; at this time I cannot provide Twitter API credentials.

Below is my code divided into separate files:

search.py:

import sys  # needed below for sys.path and sys.argv
import twitter_credentials
sys.path.append(twitter_credentials.PACKAGE_DEST)
import tweepy
from tweepy.error import RateLimitError,TweepError
import search_terms
import pandas as pd
import json
from datetime import datetime
from datetime import timedelta
import time
import threading
import queue
import concurrent.futures as conn
import re
from insertion import load_data
import twitter_credentials as tc
import sqlalchemy as db


def tweet_to_json_string(tweet_list):
    dump = []
    for tweet in tweet_list:
        dump.append(json.dumps(tweet))
    return dump


def write_tweets_to_file(f_name,tweet_list):
    t = threading.current_thread()
    print('{0}:{1}:write start'.format(t.name,f_name))
    dump = tweet_to_json_string(tweet_list)
    file = open(f_name,mode='w+')
    file.write('[')
    for line in range(0,len(dump)-1):
        file.write(dump[line] + ',\n')
    file.write(dump[len(dump)-1]+']')
    print('{0}:{1}: write successful'.format(t.name,f_name))
    file.close()
    return True


def format_file_name(term,count):
    # Format file name  
    stringy = term.replace(' ','_',len(term))
    now = datetime.now()
    date = now.strftime('%d_%m_%Y')
    return stringy + '_' + date +'_'+ str(count) + '.json' 


def get_tweets(tweet_frame,min_id,max_id,search_term,until=None):
        if max_id<0:
            tweets = api.search(q=search_term,min_id=min_id,count=100,result_type='recent',until=until)
        else:
            tweets = api.search(q=search_term,min_id=min_id,count=100,max_id=max_id-1,result_type='recent',until=until)
        for j in tweets:
            tweet_frame.append(j._json)
        return tweets


def authorize_api(consumer_key,consumer_secret,access_token,access_token_secret):
     auth = tweepy.OAuthHandler(consumer_key,consumer_secret)
     auth.set_access_token(access_token,access_token_secret)
     api = tweepy.API(auth)
     return api


def get_min_id(api,term,date):
    d = datetime.strptime(date,'%m/%d/%Y')
    target_date = datetime.strftime(d,'%Y-%m-%d')
    print(target_date)
    w=True
    tweet = None
    while w:
        try:
            tweet = api.search(q=term,count=1,result_type='recent',until=target_date)
            w=False
        except RateLimitError:
            wait_rate_limit(api=api) 
    if tweet is not None:
        json = tweet[0]._json
        print('lowerbound: {0}'.format(json['created_at']))
        return json['id']
    else:
        return None


def wait_rate_limit(api):
    t=threading.current_thread()
    print('{}: rate limit reached: sleep start...'.format(t.name))
    status = api.rate_limit_status()['resources']['search']['/search/tweets']
    wait = abs(time.time() - status['reset'])
    time.sleep(wait)
    print('{}: sleep end'.format(t.name))
    #check rate limit again and then snooze 10sec if not ready
    status = api.rate_limit_status()['resources']['search']['/search/tweets']
    while status['remaining']<1:
        print('snooze 10....')
        time.sleep(10)
        status = api.rate_limit_status()['resources']['search']['/search/tweets']
    return 


def get_all_tweets_on_date(api,lock,term,date,min_id=None,max_id = -1,term_count=0):
    #date in form mm/dd/YYYY
    t= threading.current_thread()
    if min_id is None:
        min_id = get_min_id(api=api,term=term,date=date)
    d = datetime.strptime(date,'%m/%d/%Y')
    d += timedelta(days=1)
    bound_date = datetime.strftime(d,'%Y-%m-%d')
    all_tweets = []
    error = False
    error2 = False
    done = False
    i = 0
    ld = 0

    while not done:
        retry = 3
        try:
            #get tweets
            if max_id<0:
                tweets = api.search(q=term, min_id=min_id, count=100,
                                    result_type='recent', until=bound_date, tweet_mode='extended')
            else:
                tweets = api.search(q=term, min_id=min_id, count=100, max_id=max_id-1, result_type='recent',
                                    until=bound_date, tweet_mode='extended')
            for j in tweets:
                all_tweets.append(j._json)
            max_id = tweets[-1].id
            error = False
            error2 = False
            print(t.name + ':'+ term + ':' + str(i))
            ld+=1
            if ld == 10:
                js_tweets = pd.Series(all_tweets)
                load_data(js_tweets,lock)
                ld = 0
                all_tweets = []
            if len(tweets) < 100:
                print(t.name + ':'+term+': '+"All tweets found")
                done = True
                continue
        except RateLimitError:
            print(t.name + ': rate limiting')
            if api.last_response.status_code not in (420,429):
                print(t.name,'unknown error: ',api.last_response.status_code)
                quit(1)
            #if non stored tweets exist write to file
            if len(all_tweets) > 0:
                # while retry > 0:
                    try:
                        js_tweets = pd.Series(all_tweets)
                        load_data(js_tweets, lock)
                        all_tweets = []
                        retry = 0
                    except ConnectionError:
                        print(threading.current_thread(),'Database Error: retrying...')
                    finally:
                        retry -= 1
                        term_count += 1
            #check rate limit status and wait till reset time
            print(t.name,'rl: tweets stored')
            wait_rate_limit(api)
        except TweepError:
            print(t.name + ': Tweep Error: ' + str(api.last_response.status_code))
            if len(all_tweets) > 0:
                # while retry > 0:
                    try:
                        js_tweets = pd.Series(all_tweets)
                        load_data(js_tweets, lock)
                        all_tweets = []
                        retry = 0
                    except ConnectionError:
                        print(threading.current_thread(),'Database Error: retrying...')
                    finally:
                        retry -= 1
                        term_count += 1
            if error2:
                print('Consecutive Errors: Search aborting')
                done = True
            elif error:
                error2 = True
            # lock1.acquire()
            # terms.put(term,False)
            # min_ids.put(min_id,False)
            # max_ids.put(max_id,False)
            # term_counts.put(term_count,False)
            # lock1.release()
        i+=1
    print(t.name,'dumping final tweets')
    if len(all_tweets) > 0:
        # while retry > 0:
            try:
                js_tweets = pd.Series(all_tweets)
                print(t.name,'exit: json')
                load_data(js_tweets, lock)
                print(t.name,'exit:dump')
                all_tweets = []
                retry = 0
            except ConnectionError:
                print(threading.current_thread(),'Database Error: retrying...')
            finally:
                term_count += 1
                retry -= 1
    print(t.name,'returning to term queue...')
    return max_id


def threading_twitter(api, date, delta,lock_1, lock_2, lock_3):
    time.sleep(int(delta)*30)
    t = threading.current_thread()
    lock_1.acquire()
    qs = terms.qsize()
    lock_1.release()
    print(t.name + ': enter')
    while terms.qsize() > 0:
        print(t.name, 'new_term')
        lock_1.acquire()
        term = terms.get(False)
        min_id = min_ids.get(False)
        max_id = max_ids.get(False)
        term_count = term_counts.get(False)
        lock_1.release()
        final_id = get_all_tweets_on_date(api,lock_3,term,date,min_id=min_id,max_id=max_id,term_count=term_count)
        print("{0} {1} Done".format(t.name,term))
        lock_2.acquire()
        final_ids[term] = final_id
        lock_2.release()
        lock_1.acquire()
        qs = terms.qsize()
        lock_1.release()
    print(t.name + ': end')


if __name__ == "__main__":
    #grab search terms from file
    line_args = sys.argv
    if not (len(line_args)>=3):
        print('Input must be in form:  search.py "*.csv" "MM/DD/YYYY"')
        exit()
    file_name = line_args[1]
    Date = line_args[2]
    if re.match(r'[0-1][0-9]/[0-3][0-9]/2[0-9][0-9][0-9]', Date) is None:
        print(Date)
        print('Incorrect Date Format\n')
        print(Date)
        raise TypeError
    elif re.match(r'\S*.csv', file_name) is None:
        print('Incorrect Filename Format')
        raise TypeError
    terms_frame = pd.read_csv(line_args[1],na_values='None',comment='#')


    #Define search args
    global terms
    terms = queue.Queue()
    for term in terms_frame['term']:
        terms.put(term,False)    

    global min_ids
    min_ids = queue.Queue()
    for id in terms_frame['last_id']:
        min_ids.put(id,False)

    global max_ids
    max_ids = queue.Queue()
    for i in range(0,len(terms_frame['term'])):
        max_ids.put(-1,False)

    global term_counts
    term_counts = queue.Queue()
    for i in range(0,len(terms_frame['term'])):
        term_counts.put(10,False)

    global final_ids
    final_ids = {}



    #Define arg locks for multithreading
    lock1 = threading.Lock()
    lock2 = threading.Lock()
    lock_db = threading.Lock()

    #Create API list and make connections for each set of keys in twitter_credentials.py
    apis = []
    len_keys = min(len(twitter_credentials.ACCESS_TOKEN_SECRET),len(twitter_credentials.ACCESS_TOKEN),
                len(twitter_credentials.CONSUMER_SECRET),len(twitter_credentials.CONSUMER_KEY))

    for i in range(0,len_keys):
        auth = tweepy.OAuthHandler(twitter_credentials.CONSUMER_KEY[i], twitter_credentials.CONSUMER_SECRET[i])
        auth.set_access_token(twitter_credentials.ACCESS_TOKEN[i],twitter_credentials.ACCESS_TOKEN_SECRET[i])
        api = tweepy.API(auth)
        apis.append(api)

    threads = []
    with conn.ThreadPoolExecutor(max_workers=3) as executor:
        for i in range(0,len(apis)):
            threads.append(executor.submit(threading_twitter,apis[i], line_args[2], i, lock1, lock2, lock_db))
        for i in threads:
            i.result()

insertion.py:

import sqlalchemy as db
import pandas as pd
from pandas.io.json import json_normalize
from sqlalchemy.dialects.mysql import insert
import twitter_credentials as tc
import numpy as np
from json import load, loads
from datetime import datetime
import threading


def tprint(*args):
    print(threading.current_thread().name, *args)
    return

def extract_hashtag_text(list):
    tags = []
    i=0
    max = 0
    for hash in list:
        tags.append(hash['text'])
        i+=1
        if i > max: max = i
    while len(tags) < max:
        tags.append(None)
    return tags


def extract_mentions_text(list):
    tags = []
    i=0
    max = 0
    for hash in list:
        tags.append(hash['screen_name'])
        i+=1
        if i > max: max = i
    while len(tags) < max:
        tags.append(None)
    return tags


def extract_retweets(df, lock):
    try:
        lock.acquire()
        engine = db.create_engine('mysql+mysqlconnector://{0}:{1}@{2}'.format(tc.DATABASE_USERS[0],
                                                        tc.DATABASE_PASSWORDS[0], tc.DATABASE_DESTINATION))
        connection = engine.connect()
        metadata = db.MetaData()
        metadata.reflect(bind=connection)
        tweets = metadata.tables['tweets']
        res = tweets.select(bind=connection).with_only_columns([tweets.c.id_str, tweets.c.retweeted_status_id_str]).execute()
        twid = res.fetchall()
        connection.close()
        lock.release()
    except:
        connection.close()
        lock.release()
        raise ConnectionError
    ids = pd.DataFrame(twid,columns=['id_str','retweeted_status_id_str'])
    tweet_captured = df['retweeted_status_id_str'].isin(ids['id_str'])
    retweet_captured = df['retweeted_status_id_str'].isin(ids['retweeted_status_id_str'])
    is_repeat = np.logical_not(df['id_str'].isin(ids['id_str']))
    retweet_in_df = df['retweeted_status_id_str'].isin(df['id_str'])
    to_retweet = np.logical_or(tweet_captured,retweet_captured)
    to_retweet = np.logical_or(to_retweet,retweet_in_df)
    tprint('retweets: '+str(sum(to_retweet)))
    to_retweet = np.logical_and(to_retweet,is_repeat)
    tprint('retweets after repeats: ' + str(sum(to_retweet)))
    return to_retweet

# Here is the section that includes inserting various objects into the SQL database.
# I'm including the code for a few different methods in case you want to try a slightly different approach than I used.

# Insert #3:  Inserting sample tweet data from a JSON

# I couldn't figure out how to do this directly with the nested structure,
# so I went the dataframe route even though it's massively inefficient.


def load_data(json,db_lock):
    if type(json) == str:
        tprint("open file: {0}".format(json))
        tweets_json = load(open(json))
    else:
        tweets_json = json
    tweets_df = json_normalize(tweets_json)
    tprint('normalized tweets')
    # creating a dataframe with only the tweet columns we want to keep
    if 'full_text' in tweets_df.columns:
        tweets_df = tweets_df.rename(columns={'full_text': 'text'})

    rename = {}
    for col in tweets_df.columns:
        if '.' in col:
            clr = col.replace('.', '_')
            if clr == 'quoted_status_id_str': continue
            # tprint('{0}:{1}'.format(col,clr))
            rename[col] = clr
    tweets_df = tweets_df.rename(columns=rename)
    tweet_cols = ['id_str','created_at','favorite_count', 'retweet_count','truncated', 'source', 'text',
                  'is_quote_status', 'quoted_status_id_str','quoted_status_user_id_str','retweeted_status_id_str',
                  'retweeted_status_user_id_str', 'in_reply_to_status_id_str', 'in_reply_to_user_id_str',
                  'user_followers_count','user_friends_count','user_listed_count','user_id_str']

    use_set = list(set.intersection(set(tweets_df.columns),set(tweet_cols)))
    try:
        tweets_table = tweets_df[use_set]
    except KeyError:
        raise KeyError
    dates = [datetime.strptime(x,"%a %b %d %H:%M:%S +0000 %Y") for x in tweets_table['created_at']]
    str_dates = [datetime.strftime(x,'%Y-%m-%d %H:%M:%S') for x in dates]
    tweets_table['created_at'] = str_dates

    retweet_ids = extract_retweets(tweets_table,db_lock)
    retweet_frame = tweets_table[retweet_ids]
    retweet_frame = retweet_frame[['id_str', 'created_at', 'retweet_count', 'user_id_str', 'retweeted_status_id_str',
                                   'retweeted_status_user_id_str']]
    rtlen = retweet_frame.shape[0]
    final_json_retweet = retweet_frame.to_json(orient="records")
    final_json_retweet = loads(final_json_retweet)

    tweets_table = tweets_table[~retweet_ids]
    final_json = tweets_table.to_json(orient="records")  # converting the dataframe back into a JSON
    final_json = loads(final_json)  # this step is necessary to convert it into a proper python dictionary
    tprint('tweets formatted')

    # creating a dataframe for only the user categories we want to keep
    user_table = tweets_df[['user_id_str','user_created_at','user_default_profile','user_default_profile_image','user_name','user_profile_use_background_image','user_protected','user_screen_name','user_verified']]
    final_json_u = user_table.to_json(orient="records")  # converting the dataframe back into a JSON
    final_json_u = loads(final_json_u)  # this step is necessary to convert it into a proper python dictionary
    tprint('users formatted')

    # Entities
    # hashtags
    tags = pd.DataFrame([extract_hashtag_text(x) for x in tweets_df['entities_hashtags']])
    tags['id_str'] = tweets_df['id_str']
    cols = 0
    for i in tags.columns:
        if type(i) == int:
            if i > cols: cols = i
    cols += 1
    hash_list = []
    for i in range(0,cols):
        hash_list.append(pd.DataFrame({'id_str':tags['id_str'],'entity':tags[i]}))
    ent_frame = pd.concat(hash_list,axis=0)
    ent_frame = ent_frame.dropna(axis=0,how='any')
    ent_frame['type'] = ['hashtag'] * len(ent_frame['id_str'])
    # getting it in the right format
    final_json_hash = ent_frame.to_json(orient="records")
    final_json_hash = loads(final_json_hash) # this step is necessary to convert it into a proper python dictionary
    tprint('hashtags formatted')

    # mentions
    mentions = pd.DataFrame([extract_mentions_text(x) for x in tweets_df['entities_user_mentions']])
    mentions['id_str'] = tweets_df['id_str']
    cols = 0
    for i in mentions.columns:
        if type(i) == int:
            if i > cols: cols = i
    cols += 1
    ment_list = []
    for i in range(0,cols):
        ment_list.append(pd.DataFrame({'id_str':mentions['id_str'],'entity':mentions[i]}))
    ment_frame = pd.concat(ment_list,axis=0)
    ment_frame = ment_frame.dropna(axis=0,how='any')
    ment_frame['type'] = ['mention'] * len(ment_frame['id_str'])

    final_json_ment = ment_frame.to_json(orient="records")  # converting the dataframe back into a JSON
    final_json_ment = loads(final_json_ment)  # this step is necessary to convert it into a proper python dictionary
    tprint('mentions formatted')
    try:
        db_lock.acquire()
        engine = db.create_engine(
            'mysql+mysqlconnector://{0}:{1}@{2}'.format(tc.DATABASE_USERS[0],
                                                        tc.DATABASE_PASSWORDS[0],tc.DATABASE_DESTINATION))
        # you will need to update this line to include your username password, and local port
        connection = engine.connect()
        metadata = db.MetaData()
        metadata.reflect(bind=connection)
        tprint('connection established')
        users = metadata.tables['users']
        entities = metadata.tables['entities']
        tweets = metadata.tables['tweets']
        retweets = metadata.tables['retweets']

        tprint('begin insert 0/5')
        # inserting the tweets dictionary into the database
        ins1 = insert(tweets).values(final_json).\
            prefix_with('IGNORE')
        result1 = connection.execute(ins1)
        tprint('tweet success 1/5')
        # inserting the users dictionary into the database
        ins2 = insert(users).values(final_json_u).\
            prefix_with("IGNORE")
        result2 = connection.execute(ins2)
        tprint('user success 2/5')
        ins3 = insert(entities).values(final_json_hash).\
            prefix_with("IGNORE")
        result3 = connection.execute(ins3)
        tprint('hashtag success 3/5')
        ins4 = insert(entities).values(final_json_ment).\
            prefix_with("IGNORE")
        result4 = connection.execute(ins4)
        tprint('mention success 4/5')
        if rtlen>0:
            ins5 = insert(retweets).values(final_json_retweet).\
                prefix_with("IGNORE")
            result5 = connection.execute(ins5)
        else:
            tprint('no retweets to record...')
        tprint('retweets success 5/5')
        connection.close()
        db_lock.release()
    except:
        connection.close()
        db_lock.release()
        raise ConnectionError
    tprint("success!! returning to search....")
    return

run_search.py:

from datetime import date, timedelta
import sys
import os

if __name__ == '__main__':
    # run search.py with terms.csv on yesterday
    yesterday = date.today() - timedelta(days=1)
    str_date = yesterday.strftime(format='%m/%d/%Y')
    command = 'python search.py "terms.csv" "{0}"'.format(str_date)
    print('begin search for {0}'.format(str_date))
    os.system(command)

terms.csv

,term,last_id
,AndrewYang,
,JohnDelaney,
,PeteButtigieg,
,TulsiGabbard,
,JulianCastro,
,SenBennetCO,
,SenatorBennet,
,MichaelBennet,
,JoeBiden,
,MikeBloomberg,
,CoryBooker,
,SenBooker,
,KamalaHarris,
,amyklobuchar,
,BetoORourke,
,BernieSanders,
,SenSanders,
,TomSteyer,

Side note: at the moment I am running this from a terminal inside PyCharm on my OSX Mojave laptop.
Apologies, the code is a bit messy right now.

Is there a managed host that can help? (Are redirection errors in .htaccess a likely culprit?)

Below is a copy of a recent support conversation with my "managed hosting" provider. The most recent message is at the top and I removed the names for privacy, so start at the bottom! Is there a host that can help with this type of problem, or do I need to hire an independent .htaccess and Apache expert? If so, can you recommend someone who can help (a hosting provider or an independent professional)? Thank you very much for any help or advice!

01/31/2020 (21:46)
Staff
I see, I suggest removing all redirects then. The only redirect you should have is one that redirects everything to www or non-www, and nothing else.

01/31/2020 (21:40)
Client
Hello again! In my first email, I gave some examples of incorrect redirects that the server is doing. It is being very liberal with redirects that it shouldn't be doing. My Google console shows several of these kinds of redirects, where the server is truncating the URL and redirecting it to a URL of its own choosing. The worst one I found was the first example. Many of my pages start with mydomain.com/string. How did it decide to redirect that page?

When this exact URL is typed into the browser, it actually goes to the wrong page. It should be a 404, since no such page exists on my website.

Something in the .htaccess or apache files is doing some kind of incorrect string match or something, but I'm not expert enough to understand the syntax of the .htaccess / apache file. I tried to check the .htaccess file, but it's like trying to decipher a foreign language for the most part.

I noticed some strange cPanel edits at the bottom of the .htaccess. I'm not sure who or what put them there; I figured one of you did it as part of the managed hosting. Could that be the culprit? The comments say not to delete them.

Most entries are from a WordPress security plug-in. And there are 301 redirects that I put there after migrating my html website to WordPress.

Phew! Sorry for the long post. I hope it makes sense.

Thank you!

01/31/2020 (21:23)
Staff
What redirection do you need help with exactly?

We will configure the redirects as you wish, just let us know

But I have no idea why Google deindexed the site.

01/31/2020 (20:49)
Staff
Hi,

Let me take a look at the redirection problem and I will answer you soon.

01/31/2020 (20:46)
Client
Are redirect errors not a hosting problem? Haha. If you can't help, I understand; I don't understand Apache and .htaccess either. But I thought a managed host would provide this kind of assistance?

01/31/2020 (20:43)
Staff
Hi,

Note that this is not a hosting problem. You should check with an SEO expert so that your website gets indexed near the top when searching on Google.

01/31/2020 (20:15)
Client
I was hoping someone there could take a look at the .htaccess and Apache files and see if there is anything unusual. Or is support here not expert in those areas? I know that Google indexing is a hidden algorithm, but I still have this redirect problem. Any help with that?

I thought "Managed hosting" included this kind of help. I was wrong?

Thank you very much for any response! (Even if it is, "I'm sorry, we don't understand very well the .htaccess and Apache files." LOL!)

Have a nice day.

01/31/2020 (20:05)
Staff
Hi,

Unfortunately that is beyond our control. We do not know Google's rules.

01/31/2020 (19:59)
Client
Hi!

I hope you can help me. My website was deindexed by Google and I have been trying to narrow down the reason. I think it could be related to some redirect problems in my .htaccess. I've been checking the website, thinking it was a mistake on my part, duplicate content, etc.; however, I am not sure. But I noticed that I am getting some strange redirect / exclusion notices in my Google webmaster console, like the following:

mydomain.com/string

For some reason, this takes the user to the following page:

mydomain.com/string-string2-string3/

Apparently, there are many similar redirects on the website. And I'm not sure what is causing it, unless it is somehow in the .htaccess file or in the apache files.

Several redirects just have the trailing .html stripped; those mostly go to the right page.

Many of the excluded and redirected pages are correct, but many make no sense. Many of the redirects were placed there when I migrated my HTML pages to WordPress.

Any idea what is causing this oddity? I hope you can help!

Thank you!