★ HostnExtra.com ★ High performance, superior and affordable cloud servers for $ 5 / month

HostnExtra: high-performance Intel Xeon hardware with 100% SSD storage. OpenStack-based cloud computing solutions for companies and developers. Scalable, feature-rich IaaS cloud computing.

We provide the best performance, uptime, and redundancy. Our network offers low-latency routes. We provide enterprise-level infrastructure to boost hardware performance and user experience. Our cloud computing is based on OpenStack, software that controls large pools of compute, storage, and networking resources throughout a data center. OpenStack is a reliable, scalable, and comprehensive project.

Secure. Fast. Reliable.

We provide 100% solid-state drive storage to all virtual machines, increasing their performance and scalability.

Choose and configure according to your requirements –

OPTION 1: Cloud server – c.tiny-1

Cores: 1 vCPU
RAM: 512 MB
Disk space: 10GB SSD
Uplink Port: 1 Gbps

Price: $ 5 / month

Order now

OPTION 2: Cloud server – c.tiny-2

Cores: 1 vCPU
RAM: 1 GB
Disk space: 30 GB SSD
Uplink Port: 1 Gbps

Price: $ 7 / month

Order now

OPTION 3: Cloud server – c.small-1

Cores: 2 vCPU
RAM: 2 GB
Disk space: 50 GB SSD
Uplink Port: 1 Gbps

Price: $ 12 / month

Order now

OPTION 4: Cloud server – c.small-2

Cores: 2 vCPU
RAM: 4 GB
Disk space: 70 GB SSD
Uplink Port: 1 Gbps

Price: $ 20 / month

Order now

OPTION 5: Cloud server – c.large-1

Cores: 4 vCPU
RAM: 4 GB
Disk space: 80 GB SSD
Uplink Port: 1 Gbps

Price: $ 25 / month

Order now

OPTION 6: Cloud server – c.large-2

Cores: 4 vCPU
RAM: 8 GB
Disk space: 90 GB SSD
Uplink Port: 1 Gbps

Price: $ 40 / month

Order now

For more plans, visit our cloud servers page. Dual Intel Xeon CPUs, redundant A+B power supplies, and 4 Gbps of redundant bandwidth. Add extra storage or resources instantly with just a few clicks.

Has anyone tested the CanoScan LiDE 400 Slim for digitizing 35mm negatives? What do you think of its performance for the task?

I am an amateur in the analog photography scene and I don't have the resources to build an analog lab where I live. However, I want to run some experiments with developed film that I have at home and then digitize the negatives to process the results. The thing is, this will be experimental, so I would like to buy a scanner with good value for money. During my research I saw this scanner mentioned several times, but looking at the reviews, most people buy it to scan documents and the like, so I have doubts. Has anyone tried it with 35mm film?

How to improve the performance of a website?

Guys, I've been checking some popular blogs and sites with Pingdom, looking at their load times and performance scores.

Then I checked my own site's performance and it scored 67.

Popular blogs have a performance score of 87, and those sites load in 1 second.

So I'd like to know how I can improve my performance score, at least to 75–80. Will it require extra costs, add-ons, or tools?

Any suggestions?

performance – C# code to find all the divisors of an integer

I wrote this so I can feed it an integer and get back an array with all the divisors of that integer. I added checks in case the integer is less than 2. The divisors in the array must be ordered from least to greatest. The code works. What I need is to optimize it to be as fast as possible. At the moment, in an online IDE, I get about 40 ms combined for the given tests. I need to trim this as much as possible.

using System.Collections.Generic;

public class Divisors
{
    public static bool IsPrime(int n)
    {
        if (n == 2) return true;
        if (n % 2 == 0) return false;

        for (int x = 3; x * x <= n; x += 2)
            if (n % x == 0)
                return false;

        return true;
    }

    public static int[] GetDivisors(int n)
    {
        List<int> divisors = new List<int>();

        if (n < 2)
        {
            return null;
        }
        else if (IsPrime(n))
        {
            return null;
        }
        else
        {
            for (int i = 2; i < n; i++)
                if (n % i == 0)
                    divisors.Add(i);
        }

        return divisors.ToArray();
    }
}
namespace Solution 
{
  using NUnit.Framework;
  using System;

  [TestFixture]
  public class SolutionTest
  {
    [Test]
    public void SampleTest()
    {
      Assert.AreEqual(new int[] {3, 5}, Divisors.GetDivisors(15));
      Assert.AreEqual(new int[] {2, 4, 8}, Divisors.GetDivisors(16));
      Assert.AreEqual(new int[] {11, 23}, Divisors.GetDivisors(253));
      Assert.AreEqual(new int[] {2, 3, 4, 6, 8, 12}, Divisors.GetDivisors(24));
    }
  }
}
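One way to cut the runtime is to trial-divide only up to the square root of n: every divisor d ≤ √n pairs with n/d, so the loop does O(√n) work instead of O(n), and the separate IsPrime pass becomes unnecessary (finding no divisor already means n is prime). A rough sketch of the idea in Python rather than C#:

```python
import math

def get_divisors(n):
    """Proper divisors of n (excluding 1 and n), in ascending order.

    Trial division runs only up to sqrt(n): each divisor d <= sqrt(n)
    pairs with n // d, so the loop is O(sqrt(n)) instead of O(n).
    Returns None for n < 2 and for primes, matching the C# version.
    """
    if n < 2:
        return None
    small, large = [], []
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            small.append(d)
            if d != n // d:          # avoid duplicating an exact square root
                large.append(n // d)
    if not small:                    # no divisor found => n is prime
        return None
    return small + large[::-1]

print(get_divisors(15))  # [3, 5]
print(get_divisors(24))  # [2, 3, 4, 6, 8, 12]
```

The same restructuring carries over to the C# version by collecting small divisors in one list, the paired large ones in another, and concatenating them at the end.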

performance – MySQL – Which options in the configuration file have an impact on memory usage?

I have been wondering how to manage MySQL's memory usage since, by default, it occupies up to 350 MB while idle on my machine. I have no memory problems; I was honestly just wondering how it could be done.

I found multiple answers on how to adjust the configuration file, and they worked as intended; one of them even reduced the memory usage to 100 MB.


Questions

1. Which of the options affects memory usage the most?

2. Where can I find information about the performance impact of these options? (documentation/books/anything)


Sample configuration file with which MySQL takes only 100 MB (it's a Docker container):

[mysqld]
performance_schema = 0
skip-host-cache
skip-name-resolve
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
secure-file-priv = NULL
skip-external-locking
max_connections = 100
connect_timeout = 5
wait_timeout = 600
max_allowed_packet = 16M
thread_cache_size = 128
sort_buffer_size = 4M
bulk_insert_buffer_size = 16M
tmp_table_size = 32M
max_heap_table_size = 32M
myisam_recover_options = BACKUP
key_buffer_size = 128M
table_open_cache = 400
myisam_sort_buffer_size = 512M
concurrent_insert = 2
read_buffer_size = 2M
read_rnd_buffer_size = 1M
long_query_time = 10
expire_logs_days = 10
max_binlog_size = 100M
default_storage_engine = InnoDB
innodb_buffer_pool_size = 32M
innodb_log_buffer_size = 8M
innodb_file_per_table = 1
innodb_open_files = 400
innodb_io_capacity = 400
innodb_flush_method = O_DIRECT

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[isamchk]
key_buffer = 16M
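Regarding the first question, a rough rule of thumb is that the options that dominate are the global buffers (allocated once) plus the per-connection buffers multiplied by max_connections. A minimal Python sketch of that classic worst-case heuristic, using values from the file above; this is an upper bound for sizing, not what MySQL actually allocates at idle:

```python
# Rough worst-case memory estimate for the my.cnf shown above.
# This is a sizing heuristic, not an exact figure: MySQL allocates
# per-connection buffers lazily, so real usage is usually far lower.
MB = 1024 * 1024

global_buffers = {               # allocated once for the whole server
    "innodb_buffer_pool_size": 32 * MB,
    "innodb_log_buffer_size": 8 * MB,
    "key_buffer_size": 128 * MB,
}
per_connection_buffers = {       # potentially allocated per connection
    "sort_buffer_size": 4 * MB,
    "read_buffer_size": 2 * MB,
    "read_rnd_buffer_size": 1 * MB,
}
max_connections = 100

worst_case = (sum(global_buffers.values())
              + max_connections * sum(per_connection_buffers.values()))
print(f"theoretical worst case: {worst_case / MB:.0f} MB")  # 868 MB
```

Actual usage is normally far lower because per-connection buffers are allocated only when needed and only as large as needed, which is why the idle footprint here is ~100 MB rather than ~868 MB.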

Performance: nested loop optimization to calculate geodesic distance

I segmented an image into N superpixels and built a graph in which each superpixel is treated as a node. Information about neighboring superpixels is stored in the matrix glcms. The weight between each pair of neighboring superpixels is stored in a matrix W.

Finally, I want to calculate the geodesic distance between non-adjacent superpixels using the graphshortestpath function. The following code performs the process described above; however, it takes a long time. Specifically, the last section of the code, which calculates the geodesic distances, takes longer than expected (more than 15 seconds).

    Img=imread('input.jpg');
    [rows, columns, numberOfColorChannels] = size(Img);
    [L,N] = superpixels(Img,250);

    %Identifying neighborhood relationships
    glcms = graycomatrix(L,'NumLevels',N,'GrayLimits',[1,N],'Offset',[0,1;1,0]); %Create gray-level co-occurrence matrix from image
    glcms = sum(glcms,3);    % add together the two matrices
    glcms = glcms + glcms.'; % add upper and lower triangles together, make it symmetric
    glcms(1:N+1:end) = 0;    % set the diagonal to zero, we don't want to see "1 is neighbor of 1"

    idx = label2idx(L);            % assumed: idx was not defined in the posted snippet
    [numRows, numCols] = size(L);  % assumed: numRows/numCols were not defined either
    data = zeros(N,3);
    for labelVal = 1:N
        redIdx = idx{labelVal};
        greenIdx = idx{labelVal}+numRows*numCols;
        blueIdx = idx{labelVal}+2*numRows*numCols;
        data(labelVal,1) = mean(Img(redIdx));
        data(labelVal,2) = mean(Img(greenIdx));
        data(labelVal,3) = mean(Img(blueIdx));
    end

   Euc=zeros(N);
% Euclidean Distance
for i=1:N
    for j=1:N
        if glcms(i,j)~=0
            Euc(i,j)=sqrt(((data(i,1)-data(j,1))^2)+((data(i,2)-data(j,2))^2)+((data(i,3)-data(j,3))^2));
        end
    end
end


W=zeros(N);
W_num=zeros(N);

W_den=zeros(N);
OMG1=0.1;
for i=1:N
    for j=1:N
        if(Euc(i,j)~=0)
         W_num(i,j)=exp(-OMG1*(Euc(i,j)));
         W_den(i,i)=W_num(i,j)+W_den(i,i);
        end
    end
end

for i=1:N
    for j=1:N
         if(Euc(i,j)~=0)
             W(i,j)=(W_num(i,j))/(W_den(i,i));   % Connectivity Matrix W

         end
    end
end


s_star_temp=zeros(N);   %temporary variable for geodesic distance measurement
W_sparse=zeros(N);
W_sparse=sparse(W);
for i=1:N
    for j=1:N
        if W(i,j)==0 && i~=j
            s_star_temp(i,j)=graphshortestpath(W_sparse,i,j); % Geodesic Distance
        end
    end
end


The question is how to optimize the code so that it is more efficient, that is, it requires less time.
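One concrete saving is to avoid calling the shortest-path solver once per (i, j) pair: graphshortestpath(W_sparse, i) already returns the distances from node i to every other node, so N calls suffice instead of N², and an all-pairs call is cheaper still. The same idea sketched in Python with SciPy, using a toy 4-node graph standing in for W:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

# Toy weighted adjacency matrix standing in for W (4 "superpixels").
# shortest_path computes ALL pairwise geodesic distances in one call,
# replacing the N*N loop that invokes the solver once per (i, j) pair.
W = np.array([[0, 1, 0, 0],
              [1, 0, 2, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = shortest_path(csr_matrix(W), method="D", directed=False)  # Dijkstra
print(D[0, 3])  # path 0 -> 1 -> 2 -> 3, total weight 1 + 2 + 1 = 4.0
```

In recent MATLAB releases, graph(W) together with distances(G) offers the same all-pairs computation in a single call.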

sql server: creating an index that is not used by a query (SELECT) reduces that query's performance

I just saw this video of Pinal Dave.

He has a SELECT query that produces ~370k reads in tempdb and ~1,200 reads of the table the query is SELECTing from.

He then creates an index (let's call it Index1) that eliminates the tempdb spool and therefore improves query performance. Everything is OK so far.

However, he then creates an additional index (we'll call it Index2) and leaves Index1 intact.

He then reruns the query and, despite Index2 not being used, query performance returns to its original state, with the ~370k tempdb spool still in place.

He doesn't really explain what causes this (unless I missed it).

What would cause this behavior?

Performance: Excel offers a non-optimal solution

I am minimizing costs for a basic transport problem; however, the Excel Solver (Simplex LP) is giving me a solution whose cost is almost double that of an alternative solution I found by hand. Is there any explanation for why Simplex is not giving the global minimum?
Here is the file; on the left is the Solver result and on the right is the result entered manually:
https://drive.google.com/open?id=1MdYffQNuyfRtKX8p6l_X6iZKOTg0Prjw
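For what it's worth, a pure linear program has no local minima, so a correct Simplex run always returns the global optimum; when Excel's answer is worse than a hand-built solution, the worksheet model usually differs from the intended LP (a non-linear cell, an integer constraint, a sign error). A hypothetical cross-check of a toy transport problem with SciPy's linprog (the supplies, demands, and costs below are made up for illustration, not taken from the linked file):

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-source / 2-destination transport problem as a sanity check.
# Decision variables: x11, x12, x21, x22 (units shipped source->destination).
cost = np.array([2, 3, 4, 1])          # per-unit shipping costs
A_ub = [[1, 1, 0, 0],                  # source 1 ships at most 20 units
        [0, 0, 1, 1]]                  # source 2 ships at most 30 units
b_ub = [20, 30]
A_eq = [[1, 0, 1, 0],                  # destination 1 needs exactly 25 units
        [0, 1, 0, 1]]                  # destination 2 needs exactly 25 units
b_eq = [25, 25]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.fun)  # minimal total cost: 85.0
```

Rebuilding the spreadsheet's problem in this form and comparing the objective values would show whether the sheet model matches the LP you intended.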

performance: simplify the code of this list program

Hello. With this program, I ask for a number and have to create that many nodes, printing each digit as many times as its value. If I write 123, the output should be 1 -> 2 -> 2 -> 3 -> 3 -> 3. If the number given as input is negative, the list should be built by adding the nodes at the head.

#include <stdio.h>
#include <stdlib.h>

typedef struct Node{
    int val;
    struct Node *next;
} node;

int main(){

    char num_str[64] = {0};
    int num, cifra, *p, counter=1;

    printf("Write a number:\n");
    scanf("%d", &num);

    int len = snprintf(num_str, 64, "%d", num);
    printf("The length is %d\n", len);

    p = (int *)malloc(len*sizeof(int));

    for(int i = 0; i < len; i++) {
        cifra = (int)(num_str[i])-(int)('0');
        p[i] = cifra;
    }

    node *head = NULL;
    head = (node *)malloc(sizeof(node));
    head->next = NULL;   /* terminate the list so traversal stops at the dummy node */
    node *temp = head;

    if(num_str[0]!='-'){
        for(int i=0; i < len; i++){
            for(int j=p[i]; j > 0; j--){
                temp->val = p[i];
                temp->next = (node *)malloc(sizeof(node));
                temp = temp->next;
                temp->next = NULL;
                counter++;
            }
        }
    }

    else{
        for(int i=0; i < len; i++){
            for(int j=p[i]; j > 0; j--){
                temp = (node *)malloc(sizeof(node));
                temp->val = p[i];
                temp->next = head;
                head = temp;
                counter++;
            }
        }
    }

    temp = head;

    while(temp->next != NULL){
        printf("%d -> ", temp->val);
        temp = temp->next;
    }

    printf("\n");

    printf("I have created %d nodes\n", counter);

    return 0;
}
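As a possible simplification, the whole exercise reduces to: repeat each digit d of the number d times, appending for positive input and prepending for negative input. A sketch of that logic in Python (not a drop-in replacement for the C program, just the algorithm):

```python
from collections import deque

def digit_list(num):
    """Each digit d of num repeated d times; for negative input the nodes
    are prepended, mirroring the 'insert at head' branch of the C code."""
    digits = [int(c) for c in str(abs(num))]
    out = deque()
    for d in digits:
        for _ in range(d):          # digit 0 contributes no nodes
            if num < 0:
                out.appendleft(d)   # build the list at the head
            else:
                out.append(d)       # build the list at the tail
    return list(out)

print(digit_list(123))   # [1, 2, 2, 3, 3, 3]
print(digit_list(-123))  # [3, 3, 3, 2, 2, 1]
```

In C the same shape would use a single loop over the digit string with one append-or-prepend helper, removing the need for the intermediate p array.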

performance tuning – how to speed up Timing[4^(10^10)]

I have an old computer with an AMD Phenom II quad-core processor (3 GHz). When I run Timing[4^(10^10)], the computer takes hours to evaluate the expression. I am not sure how long, because Windows decided my computer was idle, installed updates, and restarted, losing my work :(. I am currently running the process again on Linux. However, I notice that on Linux Mathematica uses just one core. What can I do to speed up the process? I have seen that using floating point, that is, Timing[4.^(10^10)], might work, but this is for school, so I have to use the code as written.
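Part of the problem is sheer size rather than the CPU: 4^(10^10) = 2^(2·10^10) has about six billion decimal digits and occupies roughly 2.3 GiB in binary before any base conversion for display, and exact bignum exponentiation of this kind is largely single-threaded. A quick back-of-the-envelope check in Python:

```python
import math

# Number of decimal digits of 4**(10**10) is about exponent * log10(4).
digits = 10**10 * math.log10(4)
print(f"about {digits:.3e} decimal digits")

# In binary, 4**k = 2**(2k), so the raw integer needs 2 * 10**10 bits.
bits = 2 * 10**10
print(f"about {bits / 8 / 1024**3:.2f} GiB of memory for the bare integer")
```

So even a fast machine spends its time on memory traffic and on converting billions of digits, which is why the floating-point form returns instantly while the exact form takes hours.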