python – How can I optimize my Von Neumann neighborhood algorithm?

I am working on a small project that requires finding Von Neumann neighborhoods in a matrix. Basically, whenever there is a positive value in the array, I want to get the neighborhood for that value. I used an iterative approach because I don’t think there’s a way to do this in less than O(n) time as you have to check every single index to make sure all of the positive values are found. My solution is this:

is_counted = {}


def populate_dict(grid, n):
    rows = len(grid)
    cols = len(grid[0])
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] > 0:
                find_von_neumann(grid, i, j, n, rows, cols)

    return len(is_counted.keys())


def find_von_neumann(grid, curr_i, curr_j, n, rows, cols):
    # print(curr_i, curr_j)
    if n == 0:

        cell = grid[curr_i][curr_j]

        if cell > 0:
            key = str(curr_i) + str(curr_j)
            if key not in is_counted:
                is_counted[key] = 1

    if n >= 1:

        coord_list = []
        # Use min so that if n > col/row, those values outside of the grid will not be computed.
        for i in range(curr_i + min(-n, rows), curr_i + min(n + 1, rows)):
            for j in range(curr_j + min(-n, cols), min(curr_j + (n + 1), cols)):
                dist = abs(curr_i - i) + abs(curr_j - j)

                if n >= dist >= -n and i >= 0 and j >= 0 and i < rows and j < cols:
                    coord_list.append((i, j))

        for coord in coord_list:
            key = str(coord[0]) + str(coord[1])
            if grid[coord[0]][coord[1]] < 1:
                if key not in is_counted:
                    is_counted[key] = 1
            else:
                if key not in is_counted:
                    is_counted[key] = 1
                    # find_von_neumann(grid, coord[0], coord[1], n, rows, cols)


neighbors = populate_dict((
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
), 2)

I went with a hash table as my data structure, but I believe the worst-case runtime currently looks very similar to O(n^2). For example, if N (the distance factor) is equal to the number of rows or columns AND every single value is positive, it's going to be inefficient. Am I missing something with my solution? For example, could using dynamic programming help cut down on the runtime, or is this already a good solution?

I'm fairly new to algorithms and runtime analysis, so please correct me if I messed up anything in my own analysis of this, thanks!
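For reference, here is a minimal sketch of the same counting idea, assuming the goal is the number of distinct cells that lie within Manhattan distance n of any positive entry. The function name `count_von_neumann_cells` is illustrative, not part of the question's code; it stores coordinates as tuples in a set and walks only the diamond around each positive cell.

def count_von_neumann_cells(grid, n):
    """Count distinct cells within Manhattan distance n of any positive cell.

    A compact sketch of the same idea as populate_dict/find_von_neumann above,
    using (row, col) tuples in a set instead of string keys.
    """
    rows, cols = len(grid), len(grid[0])
    counted = set()
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] > 0:
                # Walk only the diamond of radius n around (i, j), clipped to the grid.
                for di in range(-n, n + 1):
                    span = n - abs(di)
                    for dj in range(-span, span + 1):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols:
                            counted.add((ni, nj))
    return len(counted)

The work is still proportional to the number of positive cells times the neighbourhood size, which is exactly the dense, large-n case the question worries about, so this is a constant-factor tidy-up rather than an asymptotic improvement.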

performance – How do I optimize the bubble sort in Assembly 8086?

I tried to implement bubble sort in Assembly 8086.

datasg      SEGMENT BYTE 'data'
array       DB 1, 3, 2, 5, 4
n           DW 5
datasg      ENDS
stacksg     SEGMENT BYTE STACK 'stack'
            DW 12 DUP(?)
stacksg     ENDS
codesg      SEGMENT PARA 'code'
            ASSUME CS:codesg, DS:datasg, SS:stacksg
MAIN        PROC FAR

; Pushing the previous data segment to keep it secure.
            PUSH DS
            XOR AX, AX
            PUSH AX
            MOV AX, datasg
            MOV DS, AX
;SI = i
            XOR SI, SI
            MOV CX, n
            DEC CX
out:        PUSH CX; Pushing CX to the stack before entering the second for loop
            XOR DI, DI
            MOV CX, n
            DEC CX
            SUB CX, SI
in:         MOV AH, array[DI]
            CMP AH, array[DI+1]
            JLE if_end
            XCHG AH, array[DI+1]
            MOV array[DI], AH
if_end:     INC DI
            LOOP in
            POP CX
            INC SI
            LOOP out
            XOR SI, SI
; Some garbage code to move array elements to AL register one by one to see them while debugging.
            MOV AL, array[SI]
            INC SI
            MOV AL, array[SI]
            INC SI
            MOV AL, array[SI]
            INC SI
            MOV AL, array[SI]
            INC SI
            MOV AL, array[SI]
            RETF
MAIN        ENDP
codesg      ENDS
            END MAIN

It seems to work for the example in the code above, and I also tried it with different arrays and they all seem to work.
I just want to know whether there is a way to improve it – for example, changing the jumps to decrease the code size, or using AX with XCHG because that is faster.

I also can't quite grasp the idea of pushing CX to the stack in order to nest the loops. Any suggestions about that would be very welcome.
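Not 8086-specific, but as a point of comparison, here is a minimal Python sketch of the usual algorithm-level refinement of bubble sort (stop as soon as a full pass makes no swaps). The function name `bubble_sort` is illustrative; the register-level questions about JMP, LOOP, and XCHG are separate concerns.

def bubble_sort(a):
    """Bubble sort with an early exit: stop once a full pass makes no swaps."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        # After i passes the last i elements are already in their final place,
        # which is what SUB CX, SI achieves in the assembly above.
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            break  # already sorted, no further passes needed
    return a

In the assembly version the same idea would amount to clearing a flag before the inner loop, setting it on every exchange, and skipping the remaining outer passes if it is still clear at the end of a pass.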

optimization – Are lessons on tail recursion transferable to languages that don’t optimize for it?

I'm currently reading through Structure and Interpretation of Computer Programs (SICP). Over the course of the book, the lesson that "you can optimize recursive procedures by writing them as tail recursive" is drilled into the reader again and again. In fact, I'm nearly 100 pages in and have yet to see a single for or while loop – it's all been recursion. This is starting to worry me. To my knowledge, optimizing tail calls in a way that effectively turns tail-recursive procedures into iterative procedures is not a common feature in modern programming languages. This brings me to my question: if I'm using a language that does not optimize tail recursion, how can I apply the lessons SICP has been teaching? Is the knowledge transferable at all?
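As a minimal sketch, assuming Python (which, like most mainstream languages, does not perform tail-call optimization): the transferable part is the accumulator-passing shape itself, because a tail-recursive procedure rewrites mechanically into a loop. The function names below are illustrative.

def factorial_tail(n, acc=1):
    """Tail-recursive form: the recursive call is the very last thing done."""
    if n <= 1:
        return acc
    return factorial_tail(n - 1, acc * n)  # CPython still grows the call stack here


def factorial_loop(n):
    """The same accumulator pattern written as an explicit loop."""
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

# Without tail-call optimization the recursive form eventually hits the
# interpreter's recursion limit (e.g. factorial_tail(10_000) raises
# RecursionError), while factorial_loop(10_000) runs fine -- but the
# state-in-accumulator design carries over unchanged.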

query performance – Optimize MySQL insertion – insert multiple records with condition (per record)

Although I'm not new to SQL, up until now I've only used the basic stuff – basic CRUD, joining tables, etc. I now have a special case where I'm expecting a lot more insertions, which go something like this (pseudocode):

foreach(id_of_student in students){
    {class} = students[id_of_student].class;
    {grade} = students[id_of_student].grade;
    INSERT INTO grades (id_of_student,class,grade) VALUES({id_of_student},{class},{grade})
    WHERE grade.notSetManually=1 ON DUPLICATE KEY UPDATE
}
  1. Of course I would rather not run this as a loop and hit the DB 50 times per grade sheet (my app checks up to 50 grade sheets per minute). Can this be done in one big query, in a more optimized way (see the sketch after this list)? I saw that there is LOOP in MySQL, but is it just syntactic sugar when it comes to the burden on the DB?

  2. Another thing – this ON DUPLICATE KEY UPDATE means that as long as my key is {id_of_student},{class}, it will update the existing row rather than add a new one, right?
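Here is a minimal sketch of one way to collapse the loop into a single statement, assuming a Python client (mysql-connector-python or any DB-API driver), a UNIQUE key on (id_of_student, class), and that the per-row condition is expressed with IF() inside ON DUPLICATE KEY UPDATE, since that clause does not accept a WHERE. The function name `upsert_grades` is illustrative; table and column names simply mirror the pseudocode above.

import mysql.connector  # assumed driver; any DB-API-compatible client works similarly


def upsert_grades(conn, students):
    """Insert or refresh all grades in one round trip instead of one query per student.

    `students` is assumed to map id_of_student -> an object with .class_ and
    .grade attributes ('class' is renamed because it is a Python keyword).
    """
    rows = [(sid, s.class_, s.grade) for sid, s in students.items()]
    placeholders = ", ".join(["(%s, %s, %s)"] * len(rows))
    sql = (
        "INSERT INTO grades (id_of_student, class, grade) "
        "VALUES {} "
        # ON DUPLICATE KEY UPDATE has no WHERE clause, so the per-row condition
        # goes into IF(): only overwrite grades that were not set manually.
        "ON DUPLICATE KEY UPDATE "
        "grade = IF(notSetManually = 1, VALUES(grade), grade)"
    ).format(placeholders)
    params = [value for row in rows for value in row]
    cur = conn.cursor()
    cur.execute(sql, params)
    conn.commit()
    cur.close()

A MySQL-side LOOP inside a stored procedure would still execute one INSERT per iteration, so it mostly moves the loop to the server rather than removing it; the multi-row VALUES list is what saves the round trips.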

I will optimize wordpress speed to boost SEO rankings for $40

I will optimize wordpress speed to boost SEO rankings

Reports from Google indicate that if your site doesn't load within 3 seconds, the visitor is likely to turn away. Your slow WordPress site is hurting your Google rankings and losing you customers and sales. Save time and get higher Google rankings, higher conversions, a lower bounce rate, and more sales with our WordPress speed optimization service.

My gig includes, but is not limited to:

✪ Optimize images

✪ Browser Caching

✪ Avoid bad requests

✪ Database Optimization

✪ WordPress Security

✪ Page Caching

✪ Eliminate render-blocking JavaScript and CSS

✪ Plugin Audit

✪ MySQL Tuning

✪ Cloudflare Setup

✪ Anti-heartbeat Setup

✪ WooCommerce Speed Optimization

✪ Apache configuration

✪ Bug free

✪ Cache Preloading from XML Sitemap

✪ Baseline and Completion Speed Assessment (GTmetrix Report)

Another benefit:

Bonus advice on how to keep the speed up all the time.

Order today to make yours one of the FASTEST sites in the world and make more sales!


c – Does gcc optimize out local arrays the same way it would optimize out a local variable?

In your specific example, no, the array would not be optimized out because it gets used in the code (its address gets passed to sendBufferOverUSB).

I know that arrays are created at compile time

Nothing's created at compile time. When you run the program, anything declared at file scope (outside the body of a function) or with the static keyword is created when the program is loaded and released when the program exits (those objects have static storage duration). Objects with auto storage duration are created when their enclosing block is entered and released when that block is exited¹. Objects with allocated storage duration are created by a call to malloc/calloc/realloc and released by a call to free².

In the code

void func(void)
{
    uint8_t tempBuffer[2] = {0x00, 0x01};
    sendBufferOverUSB(tempBuffer, 2); //sendBufferOverUSB(uint8_t* arr, int sizeArr);
}

tempBuffer has auto storage duration – it only exists for the lifetime of its enclosing function. Space for tempBuffer is set aside on function entry and released on function exit (meaning if you try to return a pointer to that array, that pointer is not valid after the function exits). By contrast, in the code

void func(void)
{
    static uint8_t staticBuffer[2] = {0x00, 0x01};
    sendBufferOverUSB(staticBuffer, 2); //sendBufferOverUSB(uint8_t* arr, int sizeArr);
}

staticBuffer has static storage duration – space for it is set aside on program startup and released on program exit, and the initialization is only performed once at program startup.


  1. Logically speaking, anyway. It’s common practice to allocate space for all auto variables at function start and release it at function exit, regardless of whether those variables are limited to an inner scope.
  2. There’s also a thread storage duration, but I don’t have enough experience with it to say anything meaningful about it.

query performance – MySQL – how to optimize select within a range bound by values in 2 different columns – innodb

How do I optimize a select within a range bound by values in 2 different columns with MySQL InnoDB? I'm matching words to the pages on which they were printed.

Each instance of a word points to exactly one page. I need the word_instance table to point to the page table. To populate that foreign key, I need to compare offsets saved from the file that contained the pages of words.

page - table with a row for each page of a book
  pid - sequential id
  bid - book id  (foreign key)
  offset_min - offset in text file where the page begins
  offset_max - offset in text file where the page ends
  page - (could be roman numeral)

word_instance - table with a row for each instance of a tracked word found in a book
  wiid - sequential id
  bid - book id (foreign key)
  wid - word id (foreign key)
  offset - offset in text file where the word instance begins
  pid - foreign key to above page table (How to update this efficiently?)

To populate word_instance.pid one book at a time, I’d like to do something like this, but I can’t find an example of how it’s done in MySQL:

  UPDATE word_instance
  INNER JOIN page ON page.offset_min <= word_instance.offset AND page.offset_max >= word_instance.offset
  SET word_instance.pid = page.pid
  WHERE page.bid = <some id> AND word_instance.bid = <some id>

I am new to MySQL and thinking about design. Am I on the right track? Is there anything special to do with the page table indexes to make this perform better?

I will create and optimize your YouTube with complete SEO for $10

I will create and optimize your YouTube with complete SEO

Welcome to my best YouTube channel SEO service

Are you looking for someone to promote and SEO your YouTube channel or videos? To grow and rank your channel I use white-hat SEO methods suggested by the latest improvements to YouTube's algorithms. Metadata is important for content and videos: it includes information about a video such as the title, description, and tags, which helps your video stand out and get found by the algorithm. I'll do YouTube SEO along with channel monetization, increasing subscribers, video promotion, and full channel customization to rank your videos and grow the channel.

My services:

✅ Title Optimization

✅ Strong Strategic Description

✅ Keywords/Hashtags Research

✅ Improve SEO Score

✅ Optimize For Organic Views

✅ Channel Tags

✅ Add End Screen

✅ Add Cards

Why choose me?

✅ Provide report & screenshot

✅ 100% satisfaction guarantee

✅ Money-back guarantee if you're not satisfied

✅ 1 month of free support

✅ Timely delivery

Note: If you have any questions about YouTube SEO, simply drop a message and you'll get a reply within seconds.

Looking forward to hearing from you.

Thank you

Best regards
MD Apurbo Rahman


performance – How could I optimize that palindrome code in C?

Hello, I am writing code in C to find the value of the last palindromic sequence of a specified size (d), and I need to optimize it because it is for an exercise platform called URI Online Judge and it is giving Time Limit Exceeded.

https://www.urionlinejudge.com.br/judge/pt/problems/view/1686

#include <stdio.h>
 
int main() {
    
    int n, d, result, i, j, k, value;
    char palavra[100000];

    while (scanf("%d %d", &n, &d) != 0) {

        if (n == 0 || d == 0) {
            return 0;
        }

        getchar();
        fgets(palavra, n, stdin);
        getchar();

        result = 0;
        if (n != d || d == 1) {
            for (i = 0; i < (n - d) + 1; i++) {
                value = 1;
                j = i, k = (i + d);
                while (j < k) {
                    if (palavra[j] != palavra[k - 1]) {
                        value = 0;
                        break;
                    }
                    j++, k--;
                }
                if (value) {
                    result = i + d;
                }
            }
        } else {
            result = n;
        }
        printf("%dn", result);
    }
}

I have made some changes, but it is still giving Time Limit Exceeded.

#include <stdio.h>
 
char palavra[100000];

int palindrome(int inicio, int fim) {
    while (inicio < fim) {
        if (palavra[inicio] != palavra[fim - 1]) {
            return 0;
        }
        inicio++, fim--;
    }
    return 1;
}

int main() {
    
    int n, d, result, i;

    while (scanf("%d %d", &n, &d) != 0) {

        if (n == 0 || d == 0) {
            return 0;
        }

        getchar();
        fgets (palavra, n + 1, stdin);
        getchar();

        result = 0;
        if (n != d || d != 1) {
            for (i = 0; i <= (n / 2); i++) {
                if (palindrome(n - d - i, n - i)) {
                    result = n - i;
                    break;
                }
                if (palindrome(i, i + d)) {
                    result = i + d;
                }
            }
        } else {
            result = n;
        }
        printf("%dn", result);
    }

}
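Not C, but here is a minimal Python sketch of the observation that seems to drive the second attempt: only the rightmost length-d window that is a palindrome matters, so scanning windows from the right and stopping at the first hit skips all the earlier ones. The name `last_palindrome_end` is illustrative, and the semantics are assumed to match the C code (return the 1-based end position, or 0 if no window qualifies).

def last_palindrome_end(s, d):
    """Return the 1-based end position of the rightmost length-d palindromic
    window of s, or 0 if no such window exists."""
    n = len(s)
    for start in range(n - d, -1, -1):      # rightmost window first
        window = s[start:start + d]
        if window == window[::-1]:          # palindrome test
            return start + d                # end position, as in the C version
    return 0

The worst case is still roughly n·d character comparisons; if that is not enough for the judge's limit, a rolling hash of the string and of its reverse can bring each window test down to O(1).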

java – How to optimize animation in a libgdx game

I have an animation consisting of 100 frames; each frame is a separate picture with a resolution of 800×450, and the animation should play in the background throughout the entire game. How can I optimize this animation if I cannot reduce the number of frames or the resolution of each frame?