elliptic pde – RBF-FD Laplacian solver in Python

I am trying to compute the Laplacian of a function explicitly using the RBF-FD approach. The function is sin(pi*x)*cos(pi*z/2), and its analytical Laplacian is -5/4*(pi^2)*sin(pi*x)*cos(pi*z/2).
The parameters are:

import numpy as np

nx = 101
nz = nx
dx = 0.01
dz = dx
x    = np.linspace(0, 1, nx)
z    = np.linspace(0, 1, nz)
d2px = np.zeros((nz, nx))
d2pz = np.zeros((nz, nx))
X, Z = np.meshgrid(x, z)
points = np.column_stack((X.ravel(), Z.ravel()))
stencil = 5
size = len(points)
def func(x, z):
    return np.sin(np.pi*x)*np.cos(np.pi*z/2)
f = func(points[:, 0], points[:, 1]).reshape(nx, nz)

def laplace_abs(x, z):
    return -5/4*(np.pi**2)*np.sin(np.pi*x)*np.cos(np.pi*z/2)
laplace_abs = laplace_abs(points[:, 0], points[:, 1]).reshape(nx, nz)

Then I find the RBF-FD coefficients for a 5-point stencil, as in the FDM. Once the coefficients are found, the Laplacian value should be representable as a weighted sum. The relevant formulas are shown here:
[image: RBF-FD weight formulas]
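For one stencil, the weighted-sum construction above can be sketched like this (a minimal sketch of my own, assuming the polyharmonic spline phi(r) = r^4*log(r) used in the code below; the helper name `rbf_fd_weights` is mine):

```python
import numpy as np

def rbf_fd_weights(center, nodes):
    # Interpolation matrix A[j, k] = phi(|x_j - x_k|) with phi(r) = r^4 * log(r)
    diff = nodes[:, None, :] - nodes[None, :, :]
    R = np.sqrt((diff**2).sum(axis=-1))
    A = np.zeros_like(R)
    m = R > 0
    A[m] = R[m]**4 * np.log(R[m])
    # Right-hand side: Laplacian of phi evaluated at r = |center - x_j|,
    # i.e. r^2 * (16*log(r) + 8), taking the limit value 0 at r = 0
    r = np.linalg.norm(nodes - center, axis=1)
    rhs = np.zeros_like(r)
    m = r > 0
    rhs[m] = r[m]**2 * (16 * np.log(r[m]) + 8)
    # Weights w so that sum_j w[j] * f(x_j) approximates the Laplacian of f at center
    return np.linalg.pinv(A) @ rhs

# Example: the classic 5-point cross around (0.5, 0.5) with spacing 0.01
h = 0.01
nodes = np.array([[0.5, 0.5], [0.5 - h, 0.5], [0.5 + h, 0.5],
                  [0.5, 0.5 - h], [0.5, 0.5 + h]])
w = rbf_fd_weights(nodes[0], nodes)
```

Note that pure r^4*log(r) stencils are often augmented with polynomial terms in the literature; this sketch omits that, matching the code below.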

The main part of the code is:

from sklearn.neighbors import NearestNeighbors
from scipy.spatial import distance
from numpy import linalg as LA

def get_fd_indices():
    nbrs = NearestNeighbors(n_neighbors=stencil, algorithm='ball_tree', metric='euclidean').fit(points)
    return nbrs.kneighbors(points)
distances, indices = get_fd_indices()
r = distances
# Laplacian of phi(r) = r^4*log(r) is r^2*(16*log(r) + 8), with limit 0 at r = 0
RHS = np.where(distances != 0, r**2*(16*np.log(r) + 8), 0)
def LHS(x, xi):
    R = np.zeros((size, stencil, stencil))
    for i in range(size):
        R[i] = distance.cdist(x[i], xi[i], 'euclidean')
    LHS = np.where(R != 0, R**4*np.log(R), 0)
    return LHS
LHS = LHS(points[indices], points[indices])
def get_coef(LHS, RHS):
    c = np.zeros((size, stencil, 1))
    for i in range(size):
        Left = LHS[i]
        Right = RHS[i].reshape((stencil, 1))
        c[i] = LA.pinv(Left).dot(Right)
    return c
c = get_coef(LHS, RHS)
values = f.ravel()
laplaceRBF = np.zeros(size)
for i in range(size):
    index = indices[i]
    laplaceRBF[i] = np.sum(values[index]*c[i].ravel())
laplaceRBF = laplaceRBF.reshape(nx, nz)

I expect to obtain results similar to the FDM solution:

def FDM(f, dx):
    for i in range(1, nx - 1):
        d2px[i, :] = (f[i - 1, :] - 2 * f[i, :] + f[i + 1, :]) / dx ** 2
    for j in range(1, nz - 1):
        d2pz[:, j] = (f[:, j - 1] - 2 * f[:, j] + f[:, j + 1]) / dz ** 2
    return d2px + d2pz
FDM = FDM(f, dx)

Right now the RBF-FD results are completely different from the FDM ones, so there must be an error in my RBF-FD implementation. Please tell me what is wrong.

python – Minimum moves to ensure that each element X in an array occurs X times

I would like to brainstorm and get some advice/tips regarding the following question.
Given an array, you can either insert elements into it or delete elements from it. Note that the insertions/deletions must be done in such a way that, in the end, any element X in the array occurs exactly X times.
The objective is to find the minimum number of moves needed to achieve the desired result.
For example, if A is (1,1,3,4,4,4), then we can delete one occurrence of 1 and of 3 and add a 4, giving us (1,4,4,4,4). In the final array, 1 occurs 1 time and 4 occurs 4 times. The answer is 3.
Again, if A is (10,10,10), you can simply remove all the 10s to get an empty array. So the answer here is 3.
One way of doing this is to have a map that keeps track of how many times each value occurs. After that, I tried using the approach shown in the code below (but it only handles deletions, not insertions).
So I would like some tips on how to approach this problem while taking care of both insertions and deletions.
unordered_map<int, int> map;

// Store frequency of each element
for (int i = 0; i < n; i++)
    map[a[i]]++;

// To store the minimum deletions required
int ans = 0;

for (auto i : map) {

    // Value
    int x = i.first;

    // Its frequency
    int frequency = i.second;

    // If the number is less than or equal
    // to its frequency
    if (x <= frequency) {

        // Delete extra occurrences
        ans += (frequency - x);
    }

    // Delete every occurrence of x
    else
        ans += frequency;
}

return ans;

I understand that this is the wrong Stack Exchange site for this question, so I will make sure it doesn't happen in the future.
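One way the counting approach can be extended to cover insertions as well (a sketch of my own, in Python for brevity): for each distinct value x with frequency f, either bring its count to exactly x (|x - f| moves) or remove it entirely (f moves), and take the cheaper option per value, since values are independent.

```python
from collections import Counter

def min_moves(a):
    # For each value x occurring f times, either make it occur exactly x
    # times (|x - f| inserts or deletes) or delete all f copies, whichever
    # is cheaper. Values can be handled independently.
    return sum(min(f, abs(x - f)) for x, f in Counter(a).items())
```

This matches both examples above: min_moves([1, 1, 3, 4, 4, 4]) and min_moves([10, 10, 10]) both give 3.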

FTP login and file download in Python

I have to download a file from an FTP server with Python and delete it after downloading it. The problem arises because the file keeps changing its name, that is, the name is renewed every so often.
The error code is 550 failed to open. I am stuck on this task; could someone please help me?

import ftplib
from io import FileIO
import os


ftp = ftplib.FTP("xxxx")
ftp.login("xxx", "xxxx")
ftp.cwd("/files")
filename = ftp.dir()   # note: ftp.dir() prints the listing and returns None
print(filename)
ftp.retrbinary('RETR ' + "filename", open("filename", 'wb').write)
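Since `ftp.dir()` only prints the listing and returns None, one way to cope with the changing name is to list the directory with `ftp.nlst()` (which does return the names) and download whatever is there, deleting each file from the server afterwards. A sketch, assuming the /files directory from the question; the helper name `download_and_delete` is mine:

```python
import ftplib

def download_and_delete(ftp, remote_dir="/files"):
    # ftp.nlst() returns the current file names as a list,
    # so the changing name never has to be hard-coded.
    ftp.cwd(remote_dir)
    names = ftp.nlst()
    for name in names:
        with open(name, "wb") as local:            # 'wb': we are writing the download
            ftp.retrbinary("RETR " + name, local.write)
        ftp.delete(name)                           # remove it from the server
    return names
```

Usage would be `download_and_delete(ftp)` right after `ftp.login(...)`.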

python – How to append strings to Pandas Index

I have the following Pandas data frame:

import pandas as pd
df = pd.DataFrame({'d': [1, 2, 3]}, index=['FOO', 'BAR', 'BAZ'])
df
        d
FOO     1
BAR     2
BAZ     3

What I want to do is prepend a string to each index label, yielding:

           d
xy.FOO     1
xy.BAR     2
xy.BAZ     3

How can I do that?
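One approach that should work for string labels like these is plain element-wise concatenation on the Index itself (the 'xy.' prefix is taken from the example above):

```python
import pandas as pd

df = pd.DataFrame({'d': [1, 2, 3]}, index=['FOO', 'BAR', 'BAZ'])
# Element-wise string concatenation works directly on an object-dtype Index
df.index = 'xy.' + df.index
print(df)
```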

python – How do you know whether you are developing something with efficient or ‘clean’ code?

I have been coding in python for a little over a year, and I have learned a lot and developed quite a few applications, in the process.

I do not program for my profession, I simply program recreationally. With that said, I am not exposed to new programming techniques/data structures etc., that I would be learning if my day job was in the field, for example.

I have become quite good at figuring out what I want to do by trial and error in python, and I am usually pretty successful at figuring it out!

However, sometimes when I learn something new, I find that I had been doing it the long way, with far more code than necessary, when it could have been accomplished in fewer lines or with a technique that makes the algorithm more efficient.

When you are developing software, do you strive to find the most efficient way to do something first, or do you simply code the way you are familiar?

I don’t have many programmer friends, so I have been doing this all pretty much on my own.

I watch a few Twitch streams, but besides that I do not really know anyone in person.

Hopefully that adds some context why I am asking.

python – Data API with Influx-DB and FastAPI

I’m fairly new to time series databases in general and Influx in particular.
My objective is to build a simple, general-purpose API that will allow me to write and read data from an Influx database.
I used Influx on a previous project and found that building the API for the database was pretty mechanical, so I decided to create a general API.

I divided my code in 3 modules:

  • Main.py, where I run FastAPI
  • Database.py, where I connect to the database and define the write and read actions.
  • Model.py, where I create the pydantic models for the API.

Main.py

from fastapi import FastAPI
import uvicorn
import yaml
from yaml.loader import SafeLoader
from Model.Model import WritingData, ReadingData
from Database.Database import InfluxDataBase



with open("config.yaml", "r") as ymlfile:
    cfg = yaml.load(ymlfile, Loader=SafeLoader)
server_URL = cfg["InfluxDB"]["server_URL"]
token = cfg["InfluxDB"]["token"]
org = cfg["InfluxDB"]["org"]

Influx = InfluxDataBase(server_URL,token,org)
app = FastAPI()

@app.post('/write/')
async def call_writing_influx(data: WritingData):
    Influx.write_data(data)

@app.post('/read/')
async def call_reading_influx(data: ReadingData):
    return Influx.read_data(data)
    
if __name__ == "__main__":
    uvicorn.run("main:app", host="127.0.0.1", port=5000, reload=True)

Database

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS
import json

class InfluxDataBase:
    
    def __init__(self,server_URL,token,org) -> None:
        self.client=InfluxDBClient(server_URL, token=token, org=org)
        self.write_api=self.client.write_api(write_options=SYNCHRONOUS)
        self.query_api=self.client.query_api()
        self.server_URL=server_URL
        self.token=token
        self.org=org

    def write_data(self, data) -> None:
        executable_code = 'Point(data.measurement)'
        n_fields = len(data.field)
        n_tag = len(data.tag)
        for i in range(n_tag):
            executable_code = executable_code + '.tag(list(data.tag.keys())[{}],list(data.tag.values())[{}])'.format(i, i)
        for i in range(n_fields):
            executable_code = executable_code + '.field(list(data.field.keys())[{}],list(data.field.values())[{}])'.format(i, i)
        if data.timestamp is not None:
            executable_code = executable_code + '.time(data.timestamp)'

        Data = eval(executable_code)
        self.write_api.write(bucket=data.bucket_name, record=Data)
    
    def read_data(self, data):
        query = f'''
        from(bucket: "{data.bucket_name}")''' + '''
        |> range(start: -{}h, stop: now())'''.format(data.time_interval) + f'''
        |> filter(fn:(r) => r["_measurement"] == "{data.measureament_name}")'''
        for i in range(len(data.tag)):
            query = query + f'''|> filter(fn:(r) => r["{list(data.tag.keys())[i]}"] == "{list(data.tag.values())[i]}") '''
        for i in range(len(data.field)):
            query = query + f'''|> filter(fn:(r) => r["_field"] == "{data.field[i]}") '''
        result = self.query_api.query(org=self.org, query=query)
        results = {}
        for table in result:
            for record in table.records:
                results[record.get_field()] = record.get_value()
        json_result = json.dumps(results)
        return json_result
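As an aside on read_data: the Flux string assembly can be pulled out into a small pure helper that is easy to unit-test in isolation (a sketch; `build_flux_query` is a name I made up, and the filter syntax mirrors the one used in read_data):

```python
def build_flux_query(bucket, hours, measurement, tags, fields):
    # Builds the same pipe-forward Flux query as read_data, one filter per line
    parts = [
        f'from(bucket: "{bucket}")',
        f'|> range(start: -{hours}h, stop: now())',
        f'|> filter(fn:(r) => r["_measurement"] == "{measurement}")',
    ]
    for key, value in tags.items():
        parts.append(f'|> filter(fn:(r) => r["{key}"] == "{value}")')
    for field in fields:
        parts.append(f'|> filter(fn:(r) => r["_field"] == "{field}")')
    return '\n'.join(parts)
```

read_data would then just call this helper and pass the result to query_api.query, which keeps the string formatting out of the database-access logic.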

Model.py

from pydantic import BaseModel
from typing import List,Dict,Optional
from datetime import datetime

class WritingData(BaseModel):
    bucket_name: str
    measurement: str
    tag: Dict[str, str]
    field: Dict[str, float]
    timestamp: Optional[datetime] = None

class ReadingData(BaseModel):
    bucket_name: str
    time_interval: int
    measureament_name: str
    tag: Dict[str, str]
    field: List[str]

My current code works fine; it does what it is intended to do.

Are there any good-practice rules I’m breaking?

Do you have any suggestions to improve it?
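One concrete suggestion on write_data: the eval-based string building can be replaced with plain method chaining, since Point's tag/field/time methods each return the point itself. Below is a sketch using a minimal stand-in class so the snippet runs without influxdb_client installed; the real `influxdb_client.Point` exposes the same fluent `tag`/`field`/`time` interface:

```python
class Point:
    # Minimal stand-in mirroring influxdb_client.Point's fluent interface
    def __init__(self, measurement):
        self.measurement = measurement
        self.tags, self.fields, self.timestamp = {}, {}, None

    def tag(self, key, value):
        self.tags[key] = value
        return self

    def field(self, key, value):
        self.fields[key] = value
        return self

    def time(self, ts):
        self.timestamp = ts
        return self


def build_point(measurement, tags, fields, timestamp=None):
    # Same result as the eval() version, without building code as a string
    point = Point(measurement)
    for key, value in tags.items():
        point = point.tag(key, value)
    for key, value in fields.items():
        point = point.field(key, value)
    if timestamp is not None:
        point = point.time(timestamp)
    return point
```

Besides being easier to read, this avoids eval() on data that ultimately comes from an HTTP request.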

How to make a Discord bot send automatic, constantly updated messages? (Python API)

import discord
import TOKEN_value
import memes

client = discord.Client()
TOKEN = TOKEN_value.token()

memes_ON = False


@client.event
async def on_ready():
    print('BOT HAS BEEN CONNECTED.')
    print(client.user.name)
    print(client.user.id)


@client.event
async def on_message(message):
    global memes_ON
    if message.content.lower().startswith('!memes'):
        memes_ON = True
        await message.channel.send('Now I gonna send memes for you!')

    if memes_ON:
        meme = memes.memes_search()
        if meme != 'no memes here':
            await message.channel.send(meme)


client.run(TOKEN)

The memes_search() function returns a link to some meme from imgur, and it is never a repeated link; in other words, there is not always something new to send, and in that case it returns the string ‘no memes here’.

The token() function just returns my token.

What I would like is for every new link to be sent to the chat automatically, in the channel where !memes was typed. However, the bot stops running when there are no new links and only resumes when someone sends a message in the server. The only way I got it to work was by sending messages endlessly even when they are not links; otherwise it stops running. I have already tried several things and none of them work. What I need is for the function to keep running and keep performing the “if memes_ON” check. I have printed some values and noticed that, from the moment a link is not returned, it runs the function twice and then stops until there are new messages on the server.
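The underlying issue is that the check only runs inside on_message, which fires only when someone speaks. A background polling loop decouples the check from incoming messages; here is a minimal asyncio sketch of the idea, where `fetch` and `send` are stand-ins for `memes.memes_search` and `message.channel.send` (in discord.py you would start it once from the !memes handler with `asyncio.create_task(...)`, or use `discord.ext.tasks.loop`):

```python
import asyncio

async def meme_poller(fetch, send, interval=1.0, rounds=None):
    # Keeps polling fetch() regardless of server activity; sends only new links.
    # rounds=None means "run forever"; a finite value is handy for testing.
    n = 0
    while rounds is None or n < rounds:
        meme = fetch()
        if meme != 'no memes here':
            await send(meme)
        await asyncio.sleep(interval)
        n += 1
```

This way the loop keeps checking for new links even when nobody is typing in the server, which is exactly the behaviour on_message cannot give you.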

python – Reordering a dataframe

I have a dataframe and I would like to change its layout so that I can process it.

    import pandas as pd
    import numpy as np
    import pandas_datareader as pdr
    import yfinance as yf

    ticker = ('SPY', 'MCD', 'AAPL', 'ET', 'AMZN')
    df = yf.download(ticker,start="2003-01-01", end="2020-10-15").dropna()
    dt = df['Close'].T

result:
Date 2006-02-03 2006-02-06 …
AAPL 2.566071 2.403571
AMZN 38.330002 37.950001
ET 5.662500 5.690000
….

I would like to obtain the following layout:
Date ticker value
2006-02-03 AAPL 2.566071
2006-02-03 AMZN 38.330002
2006-02-03 ET 5.662500
2006-02-06 AAPL 2.403571
2006-02-06 AMZN 37.950001
2006-02-06 ET 5.690000

I tried with stack and pivot_table, converting it to a list and then to a Series, but I could not get it to work.
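stack is indeed the right tool here: starting from the wide Close table (before the .T), stacking moves the ticker columns into the index, and reset_index flattens it into the desired long layout. A sketch with hard-coded sample values from the question, since yfinance needs a network connection:

```python
import pandas as pd

# Sample Close prices in the wide layout (dates x tickers)
close = pd.DataFrame(
    {'AAPL': [2.566071, 2.403571],
     'AMZN': [38.330002, 37.950001],
     'ET': [5.662500, 5.690000]},
    index=pd.to_datetime(['2006-02-03', '2006-02-06']),
)
close.index.name = 'Date'

# stack() turns columns into an inner index level; reset_index flattens it
long = close.stack().reset_index()
long.columns = ['Date', 'ticker', 'value']
print(long)
```

With the real data this would be `df['Close'].stack().reset_index()` instead of the hard-coded frame.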

Finding programmers that use Python in a dataframe (Pandas/Jupyter)

I have a dataframe which contains information about programmers, such as country, programming languages, etc.:

COUNTRY    PROGRAMMING_LANGUAGE
usa         javascript
uk          python;swift;kotlin
india       python;ruby
usa         c++;c;assembly;python
canada      java;php;golang;ruby
angola      python;c#
india       c;java
brazil      javascript;php
canada      php;sql
india       c#;java
brazil      java;javascript
russia      java;kotlin
china       javascript
usa         python;c;c++
india       ruby
australia   javascrit
india       php;java
china       swift;kotlin
russia      php;sql
brazil      firebase;kotlin
uk          sql;firebase
canada      python;c
portugal    python;php

My program should display in a dataframe:

  • All countries;
  • How many people from each country use python;
COUNTRY   KNOWS_PYTHON
 
usa         2
uk          1
india       1
angola      1
canada      1
portugal    1
russia      0
brazil      0
australia   0
china       0

Please share your opinion about my algorithm, and any possible way to improve it:

import pandas as pd
import numpy as np
pd.set_option('display.max_columns',100)
pd.set_option('display.max_rows',100)
df = pd.DataFrame({
"PROGRAMMER":np.arange(0,25),
"AGE":np.array((22,30,np.nan,25,19,27,28,26,33,18,14,np.nan,29,35,19,30,29,24,21,52,np.nan,24,np.nan,18,25),dtype=np.float16),
"COUNTRY":('uSa','Uk','india','usa','Canada','AngOla','India','braZil','canada','india','brazil','russia','china','usa','india',np.nan,'Australia','india','China','russia','brazil','uk','canada','portugal','ChiNa'),
"PROGRAMMING_LANGUAGE":('JAVASCRIPT','python;swift;kotlin','python;ruby','c++;c;assembly;python','java;php;golang;ruby','python;c#','c;java','javascript;php','php;sql','c#;java','java;javascript','java;kotlin','javascript','python;c;c++','ruby',np.nan,'javascrit','php;java','swift;kotlin','php;sql','firebase;kotlin','sql;firebase','python;C','python;php',np.nan),
"GENDER":('male','female','male','male','female','female',np.nan,'male','female','male','male','female','female',np.nan,'female','male','male','male','female','male',np.nan,'male','female','male','male'),
"LED_ZEPPELIN_FAN":('yes','YES','yes','yes','yes','yes','yes','yes','yes','yes','yes','yes','yes',np.nan,'yes','yes','yes','yes','yes','yes','yes','yes','yes','yes','yes'),
})
#Replacing NaN value as 'missing'

df = df.fillna("missing")
filt = (df['COUNTRY'] != "missing") & (df['PROGRAMMING_LANGUAGE'] != "missing")
table = df.loc[filt, ['COUNTRY', 'PROGRAMMING_LANGUAGE']]
table = table.applymap(str.lower)
table
#This is just a list with all countries (without duplicates), and it will be used later

total_countries = list(set(table['COUNTRY']))
#Filter rows that contain python as a programming language

filt = table['PROGRAMMING_LANGUAGE'].str.contains('python', na=False)
table_python = table.loc[filt, ['COUNTRY', 'PROGRAMMING_LANGUAGE']]
#Getting all countries that have programmers that use python (without duplicates)

countries = table_python['COUNTRY'].value_counts().index.tolist()
#Getting the number of programmers from each country that use python (including duplicates from each country)

quantities = []
for i in range(0, len(countries)):
    quantities.append(table_python['COUNTRY'].value_counts().iloc[i])
#Comparing the list that contains all countries, with a list of countries that use python.
#If there is a country that doesn't have programmers that use python, these will be added to final with 0 as one of the values

for i in total_countries:
    if i not in countries:
        countries.append(i)
        quantities.append(0)
table_python = pd.DataFrame({"COUNTRY":countries,"KNOWS_PYTHON":quantities})
table_python.set_index('COUNTRY',inplace=True)
table_python
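For comparison, the whole count can also be done with str.contains plus groupby, which avoids maintaining the two parallel lists. A sketch on a small hand-made subset of the data (the column names match the ones above):

```python
import pandas as pd

table = pd.DataFrame({
    'COUNTRY': ['usa', 'uk', 'india', 'usa', 'brazil', 'usa'],
    'PROGRAMMING_LANGUAGE': ['javascript', 'python;swift;kotlin', 'python;ruby',
                             'c++;c;assembly;python', 'javascript;php',
                             'python;c;c++'],
})

# 1 where the row mentions python, 0 otherwise; then sum per country.
knows_python = (
    table.assign(KNOWS_PYTHON=table['PROGRAMMING_LANGUAGE']
                 .str.contains('python').astype(int))
    .groupby('COUNTRY')['KNOWS_PYTHON'].sum()
    .sort_values(ascending=False)
)
print(knows_python)
```

Countries with no Python users come out as 0 automatically, so the manual "fill in the missing countries" loop is no longer needed.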

python – Vectorized code to find the position and length of runs of 1s in a bit matrix

I’m trying to write something like an adjusted run-length encoding for row-wise bit matrices. Specifically I want to “collapse” runs of 1s into the number of 1s in that run while maintaining the same number of 0s as original (right-padding as necessary). For example:

input_v = np.array((
  (0, 1, 0, 0, 1, 1, 0, 1),
  (0, 0, 1, 0, 1, 1, 1, 0)
))


expected_v = np.array((
  (0, 1, 0, 0, 2, 0, 1),
  (0, 0, 1, 0, 3, 0, 0)
))

My current attempt works after padding, but is slow:

def count_neighboring_ones(l):
    o = 0
    for i in l:
        if i == 0:
            return o
        o += 1
    return o

def f(l):
    out = []
    i = 0
    while i < len(l):
        c = count_neighboring_ones(l[i:])
        out.append(c)
        i += (c or 1)
    return out

Are there some vectorization techniques I can use to operate on the entire matrix to reduce row-wise operations and post-padding?
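One diff-based approach that vectorizes the work within each row: run boundaries fall where `np.diff` of a zero-padded copy is +1 (run start) or -1 (run end), so each run's length is just end - start + 1. A sketch (the loop over rows and the final padding remain, but all per-element work is NumPy):

```python
import numpy as np

def collapse_runs(mat):
    # Collapse each run of 1s to its length, keep 0s, right-pad rows with 0s.
    rows = []
    for row in np.asarray(mat):
        # +1 transitions mark the first 1 of a run, -1 transitions the last 1
        starts = np.flatnonzero(np.diff(np.concatenate(([0], row))) == 1)
        ends = np.flatnonzero(np.diff(np.concatenate((row, [0]))) == -1)
        keep = row == 0            # keep every 0 ...
        keep[starts] = True        # ... plus the first 1 of each run
        out = row[keep].copy()
        out[out == 1] = ends - starts + 1   # replace run starts with run lengths
        rows.append(out)
    width = max(len(r) for r in rows)
    return np.array([np.pad(r, (0, width - len(r))) for r in rows])
```

On the example input this reproduces expected_v. Fully removing the row loop is harder because collapsed rows have different lengths, but the expensive scanning is now all vectorized.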