magento2: Connection reset error in Magento 2 when processing large data sets

I'm using Magento 2.3 on Ubuntu 16.04 with the Apache server. I'm synchronizing CSV data from Dropbox into Magento 2 attributes. Everything works fine, but I have more than 17,000 SKUs and I have to process them all at once.

My code is below.

public function downloadcsv()
{
    $curl = curl_init();

    curl_setopt_array($curl, array(
        CURLOPT_URL => "https://content.dropboxapi.com/2/files/download",
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_ENCODING => "",
        CURLOPT_MAXREDIRS => 10,
        CURLOPT_TIMEOUT => 30,
        CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
        CURLOPT_CUSTOMREQUEST => "POST",
        CURLOPT_HTTPHEADER => array(
            "Authorization: Bearer DropboxAuthorizationKey",
            "cache-control: no-cache",
            "Content-Type: text/plain",
            "Dropbox-API-Arg: {\"path\": \"/test.dat\"}",
        )
    ));

    $response = curl_exec($curl);
    $err = curl_error($curl);

    curl_close($curl);

    if ($err) {
        echo "cURL Error #:" . $err;
    } else {
        // echo $response;
        $fp = fopen("/var/www/html/csv/test.csv", "wb");
        fwrite($fp, $response);
        fclose($fp);
    }
}

public function csv_to_multidimension_array()
{
    $objectManager = \Magento\Framework\App\ObjectManager::getInstance();
    $filename = '/var/www/html/csv/test.csv';
    $delimiter = ',';

    $result = array();

    $keys = array('sku', 'Description', 'LocalStock', 'empty1', 'Gross price', 'Discount', 'Net Price', 'Specials', 'Minor Group', 'empty2', 'empty3', 'empty4', 'Cost', 'empty5', 'Division', 'NET', 'Barcode', 'Pack Size', 'empty5', 'National Stock');

    foreach (file($filename) as $key => $str) {
        // if ($key == 0)
        //     continue; // skip the first line

        $values = str_getcsv($str, ",", '"');

        $result[] = array_combine($keys, $values);
    }

    foreach ($result as $res) {

        $cat_array = explode(' ', $res['Specials']);
        $sku = $res['sku'];
        $productId = $objectManager->get('\Magento\Catalog\Model\Product')->getIdBySku($res['sku']);
        $products = $objectManager->create("Magento\Catalog\Model\Product")->getCollection()->addAttributeToFilter('entity_id', array('eq' => $productId));

        if ($res['LocalStock'] != '') {
            $localqty = $res['LocalStock'];
            $resource = $objectManager->get('Magento\Framework\App\ResourceConnection');
            $connection = $resource->getConnection();

            $sql = "UPDATE inventory_source_item SET quantity = $localqty WHERE sku = '$sku' AND source_code = 'sample'";
            $connection->query($sql);
        }

I am able to update everything using this method, but when the data set is large the update does not finish. The page loads for a long time and then gives a connection reset error.

sql server – Errors in the OLAP storage engine: A duplicate attribute key was found when processing

When I try to process my cube and specifically the Employee_DIM, I get the following error:

Errors in the OLAP storage engine: A duplicate attribute key has been
found when processing: Table: 'dbo_Employee_DIM', Column: 'Name',
Value: 'Aurélie'. The attribute is 'Name'.

I think the duplicate values in the Name column are being treated as keys. After querying Employee_DIM:

    SELECT [Firstname], COUNT([Firstname]) AS dup_count
    FROM [Database].[dbo].[Employee]
    GROUP BY [Firstname]
    HAVING (COUNT([Firstname]) > 1)
    ORDER BY [Firstname]

[screenshot: query results showing first names with duplicate counts]

Is there an alternative to leaving only the business key in Employee_DIM, so that I can still get the last name and first name for future use without adding them to the dimension?

post processing: How is a dark frame *really* used?

Right away, I should mention that lunar photography is different from astrophotography of deep-sky objects. The types of frames you are describing (calibration frames) are extremely useful for deep-sky objects, but not so useful for lunar photography.

You probably do not have to worry too much about noise in lunar photography, since you can take those images at base ISO and use very short exposures (noise should not be a major problem).

As for why the frames are "blue", you would need to provide more information about the equipment used. Did you use any filter (such as a light pollution filter)? I have noticed that several types of light pollution filters (such as CLS filters, UHC filters and others) put a strong color cast on the image because they cut out parts of the color spectrum.

As for the dark, flat, bias frames, etc., you probably do not need them for lunar images, but I can explain the purpose of each and how to collect the data.

It helps to understand the different types of frames we collect in astrophotography (very different from typical photography) and why you would collect them (spoiler: the calibration frames are especially useful for images where you need to "stretch" the histogram to bring out details).

Lights

Light frames are the normal frames … with the nuance that they may be limited to certain parts of the spectrum. A camera without a filter would be sensitive to both IR and UV. A "luminance" filter collects the full visible spectrum (wavelengths of approximately 400 nm to 700 nm) but includes UV-blocking and IR-blocking filters.

A color camera has an integrated color filter array (CFA) (the most common type is a Bayer matrix) and can produce a full-color image in a single shot. But you can also create color images with a monochrome camera by taking separate images through Red, Green and Blue filters … and then merging the data in software. Regardless of whether you use a color or a monochrome camera, all of these images are variants of "light" frames.
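
As a tiny illustration of the "merge the data in software" step, here is a rough numpy sketch (the function and array names are mine, and the three inputs are assumed to be already-registered monochrome stacks of the same shape):

    import numpy as np

    # r, g, b: registered monochrome stacks shot through Red, Green and Blue filters,
    # each a 2-D array of the same shape (H, W). Stacking them along a third axis
    # gives an (H, W, 3) array that image tools treat as a full-color frame.
    def combine_rgb(r, g, b):
        return np.dstack([r, g, b])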

Darks

Dark frames are images shot with the same settings as the light frames … except with the camera covered (lens cap or body cap) so that the sensor cannot pick up any light.

The reason for doing this is that all images contain noise. The most common type is read noise, but noise is also generated by heat buildup (thermal noise), and camera sensors can exhibit pattern noise. Thermal noise will be greater in longer exposures.

The idea behind the darks is to give the software a collection of images that contain only noise. Given enough samples, it can estimate the amount of noise to expect and do a better job of subtracting that noise from the "light" frames.

Dark frames do need to use identical exposure settings (the same ISO, the same duration … the f-stop does not matter, since no light comes through the lens anyway). But they must be shot at the same physical operating temperature, since the amount of noise varies with temperature. If you shoot your lights at night and wait until the next day to collect the darks, the difference in temperature may make them unrepresentative of the amount of noise actually present in your lights.
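
To make that concrete, here is a minimal numpy sketch of what "integrating the darks" amounts to (a simplification; dedicated stacking software does considerably more, and the function names and the assumption that frames are 2-D arrays are mine):

    import numpy as np

    def master_dark(dark_frames):
        # Median-combine the darks: the median rejects outliers
        # (hot pixels, cosmic-ray hits) better than a plain mean would.
        return np.median(np.stack(dark_frames), axis=0)

    def subtract_dark(light, master):
        # Remove the estimated thermal/pattern signal from a light frame.
        return np.clip(light.astype(np.float64) - master, 0.0, None)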

Flats

Flats (and this is what I think you were getting at with the "blue" frames) are mainly intended to capture two things … #1 is vignetting on the sensor (the tendency of the frame to be darker near the corners and edges) and #2 is dust bunnies … bits on your sensor that block light.

The reason for collecting flats is that deep-sky objects are faint and the images need further processing work to tease out the details. An important part of bringing out the details is "stretching" the histogram. When you do this, very subtle differences in tonality in the data straight from the camera are stretched and exaggerated, so the tonal differences are no longer subtle … they are obvious. This means that subtle amounts of vignetting become obvious amounts of vignetting. Specks of dust that were a minor annoyance become a major annoyance in the stretched image. (By the way, an unstretched image is sometimes called linear data and a stretched image is sometimes called nonlinear data, because the histogram is usually stretched non-linearly.) There are certain post-processing steps that should only be done on linear (unstretched) data.
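
If it helps to picture what "stretching" means, here is a rough sketch of one simple nonlinear stretch (an asinh curve; real processing tools offer many different stretch functions, and this particular function is just my own minimal example):

    import numpy as np

    def asinh_stretch(img, strength=50.0):
        # Normalize the linear data to 0..1, then apply an asinh curve:
        # faint tonal differences near black are strongly amplified while
        # the highlights are compressed. The output is "nonlinear" data.
        x = img.astype(np.float64)
        x = (x - x.min()) / (x.max() - x.min() + 1e-12)
        return np.arcsinh(strength * x) / np.arcsinh(strength)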

There are several ways to capture flats. One method is to stretch a clean white cloth over the front of the lens or telescope … without wrinkles, like a drumhead. Point the camera or telescope at an area of the sky opposite the sun (if the sun is setting in the west, point the scope or camera at a featureless area of the sky in the east); this gives a fairly uniform amount of light on the fabric. I have also done it using white (clean) plastic garbage bags, but that usually requires several layers, and care must be taken to make sure there are no wrinkles. There are also high-end flat-field generators. I have even met people who use an iPad screen displaying plain white … and take a picture of that (it has to be evenly illuminated; if the screen is damaged and the light is not uniform, it will not work).

Do not try to refocus the telescope for the flats (just leave it focused at infinity). You cannot focus on something that close to the telescope anyway, and changing the focus would alter the vignetting pattern.

On a telescope, the focal ratio is not something you can easily change. But if you are using a camera lens, the focal ratio should be the same focal ratio (f-stop) you used for your lights, because the vignetting pattern varies with the f-stop.

If you remove and re-attach the camera on a telescope (or rotate it), the vignetting pattern can (and usually does) change, and that means you may need another set of flats.

Bias

This one is a little more nuanced. If you power up the camera's sensor and immediately read the data without actually taking an exposure, you will find that the pixel values (ADU values) are not actually zero. CCD imaging cameras often have a function that lets you capture a bias frame directly. With conventional cameras, simply leave the lens cap on and take the shortest possible exposure (for example, 1/4000th of a second), and that is close enough, because that amount of time is not really long enough to accumulate the kind of noise you would expect in a true "dark" frame.

Shoot several of these (enough to be a meaningful statistical sample). They are integrated to produce a master bias frame. You can shoot bias frames at any time (it is not necessary to capture them during your imaging run). They should be taken at the same ISO as the lights, but with the exposure duration as close to 0 as the camera will allow.

Why?

I mentioned at the beginning that the main reason for all these additional frame types is to help the computer software deal with your image, especially with regard to stretching your data.

Postprocessing

When you post-process the data, there is a series of steps that you work through in the software. For deep-sky objects, the free program "Deep Sky Stacker" is popular (I use a commercial program called PixInsight). The software will ask you to feed it all the frames … lights, darks, flats and bias frames.

The first step the software performs is to integrate each type of calibration frame to produce master versions of each (all the darks are merged into a "master dark", all the bias frames are combined into a "master bias", etc.).

The second step the software performs is to calibrate each of the light frames. This means it uses your master bias and master dark to help correct the noise problems (it will not be perfect) and uses the master flat to correct the uneven illumination, so that you get the same illumination across each light frame (any unevenness in tonality left in the image is real data from the object you imaged and not just the result of vignetting or dust). This step produces a new copy of each "light" frame, now called a "calibrated light".
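
In rough terms, calibrating one light frame boils down to something like the following sketch (simplified; real software also handles dark scaling, bad-pixel maps and so on, and the master frames here are assumed to be numpy arrays of the same shape as the light):

    import numpy as np

    def calibrate_light(light, master_dark, master_bias, master_flat):
        # Subtracting the master dark removes the thermal/pattern signal.
        dark_corrected = light.astype(np.float64) - master_dark
        # The flat is bias-subtracted and normalized to a mean gain of 1.0;
        # dividing by it evens out vignetting and dust shadows.
        flat = master_flat.astype(np.float64) - master_bias
        flat /= flat.mean()
        return dark_corrected / flat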

The third step is to register each of the calibrated light frames. If you are shooting deep-sky objects, you will have many stars. The positions of the stars are used to align each frame so that they all match. This may require nudging the data around a bit (and it certainly will if you enabled dithering while capturing the images, but that is another topic) to ensure that all the frames are aligned. This results in another new copy of each image … called a "registered calibrated light".
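
Real registration tools detect stars and solve for shift, rotation and scale; purely to illustrate the idea of measuring how far one frame is offset from a reference, here is a minimal translation-only sketch using phase correlation (my own simplification, not what Deep Sky Stacker or PixInsight actually do):

    import numpy as np

    def translation_offset(ref, img):
        # Estimate the (row, col) shift between two same-sized frames using
        # phase correlation: translation only, no rotation or scale.
        F1 = np.fft.fft2(ref)
        F2 = np.fft.fft2(img)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12          # keep only the phase information
        corr = np.fft.ifft2(cross).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap offsets larger than half the frame back to negative shifts.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))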

The fourth step is integration. In this step, all the registered calibrated lights are combined. This could be done with a simple average, but with enough samples there are better integration algorithms. Integration looks at the same pixel in every frame of the input data. Suppose the pixel we are integrating is 10 rows down and 10 columns in; we look at that same pixel (same spot) in each image. Suppose that spot is supposed to be the background of space, so the pixel should be nearly black. And suppose that in 9 of the 10 input frames it is nearly black, but in a single frame it is nearly white (due to noise). If we "average" all 10 pixels, the noisy pixel is reduced to only 1/10 of its previous brightness. This reduces perceptible noise.

There are better algorithms if you have enough data to be statistically significant. The "sigma clipping" method establishes a statistical mean and a deviation from that mean, and this can have surprising results. Suppose we map our ADU values to brightness percentages, and suppose that in 9 out of 10 frames the pixel brightness is around 3-5%. But suppose an airplane flew through one frame and that pixel is very bright … 98%. The statistical method determines that 98% is too much of an outlier given that the rest of the set has values in the 3-5% range, and decides the outlier should be ignored (it will probably be replaced with the average value). This means you can still include that tenth frame in which the airplane flew through, and the software will completely eliminate the airplane (with the averaging method you would see a very faint airplane trail … with sigma clipping it disappears completely). This is an area where the software is magical (well … not magic, it's math … but it seems like magic).
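
A bare-bones numpy sketch of that idea (a single clipping pass at a fixed kappa; real integration routines iterate and offer several different rejection schemes):

    import numpy as np

    def sigma_clipped_stack(frames, kappa=3.0):
        # Combine registered frames pixel by pixel, ignoring samples that deviate
        # more than kappa standard deviations from the per-pixel mean
        # (airplane/satellite trails, cosmic rays, etc.).
        cube = np.stack([f.astype(np.float64) for f in frames])   # shape (N, H, W)
        mean = cube.mean(axis=0)
        std = cube.std(axis=0)
        keep = np.abs(cube - mean) <= kappa * std
        kept_sum = np.where(keep, cube, 0.0).sum(axis=0)
        counts = keep.sum(axis=0)
        # Average the surviving samples; fall back to the plain mean where
        # everything was clipped.
        return np.where(counts > 0, kept_sum / np.maximum(counts, 1), mean)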

At this point, you finally have a "master light" frame … the combined result of all your image acquisition work. At this point you will probably give that image a slight crop (to get rid of the ragged edges created when each frame was shifted to align the stars) and then begin to process the data artistically to produce the result you want (most of the mechanical processing steps, which tend to be somewhat automated, are now complete).

**Moon Photography**

When you shoot lunar or planetary images, the exposure duration is very short (a fraction of a second). The subjects are bright. The images do not need much in the way of "stretching".

Because of this, it is usually not necessary to collect bias frames or dark frames. You could collect flat frames to help with the dust bunnies, but you probably will not need to "stretch" the data strongly enough for vignetting to become a problem. This means you can skip the flat frames as well.

When you shoot lunar or planetary images, these very fast exposure times mean the exposure is not long enough to pick up stars (if you ever see lunar or planetary images that contain stars … the image is probably a composite). No stars means you cannot use star alignment to "register" the frames.

Data acquisition usually involves capturing a short amount of video (perhaps 30 seconds). Ideally you use a camera with a global shutter and a reasonably high frame rate. DSLRs are usually not very good for this because their video frames tend to be compressed frames rather than RAW frames.

Stacking lunar and planetary images requires different software. The free products that do this are Registax and AutoStakkert. AutoStakkert is a bit more popular these days for the "stacking" but does not perform the post-processing steps (for those you need different software). Registax does the stacking and many of the subsequent processing steps, but its stacking does not seem to be as good as AutoStakkert's. For this reason, many people run the data through AutoStakkert first to get the combined image … and then open it in Registax for further processing. There are also non-free applications that can be used.

Lunar and planetary stacking tries to align the frames based on the circular disk of the object and also to find features that show a bit of contrast and align on those. The problem is … the atmosphere makes the moon appear to wobble (as if you were looking at an image resting at the bottom of a puddle of water with gentle ripples moving across it).

Before integrating the data, you generally want to find some good representative frames and scan the rest of the frames for data of similar quality (contrast features in similar positions). Basically, it is about finding the best frames (the closest matches) and discarding the rest. You might tell the software to take the best 10% of the frames. Those best frames can be combined and generally give a much better result than you would get from any single frame.
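
As a rough illustration of "keep the best 10%": quality is usually scored with some sharpness/contrast metric and the highest-scoring frames are kept. The metric and names below are my own minimal choices; tools like AutoStakkert use far more sophisticated local quality estimators.

    import numpy as np

    def sharpness(frame):
        # Variance of a simple Laplacian response: higher = more fine detail.
        lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
               np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4.0 * frame)
        return lap.var()

    def best_fraction(frames, keep=0.10):
        # Rank video frames by sharpness and keep the best `keep` fraction.
        scores = [sharpness(f.astype(np.float64)) for f in frames]
        order = np.argsort(scores)[::-1]
        n_keep = max(1, int(len(frames) * keep))
        return [frames[i] for i in order[:n_keep]]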

I often take lunar photographs of the entire moon as a single frame. If I were shooting a highly magnified view (just a crater or a single feature), I would capture a 30-second video clip and process the data.

post processing: How can I "subtract" a layer from an image in Photoshop?

Total beginner here who does not even know the basic terminology for photo editing, which is why I am asking for help here instead of searching Google.

I have a picture of the moon, but it's blue. I have another image that is just flat blue. Both are raw data from a camera mounted on a telescope. I also have a "zero" image, where the telescope was not pointed at anything in particular, which is meant to be used to remove the grain from the image.

But I have no idea how to do any of these things. How do I subtract the flat blue image from the moon image? And I am guessing the process is the same for removing the grain? I do not know.

Image processing – Efficient algorithms for reconstructing shredded documents

Your problem is NP-complete, even for strips (n strips yield (2n)! possible arrangements), so people use heuristics, transforms such as the Hough transform and morphological filters (to match the continuity of the text, though this considerably increases the cost of each comparison), or some kind of genetic search / NN, or ant colony optimization.

For a summary of the consecutive steps and several algorithms, I recommend the research on automated reconstruction of shredded documents using heuristic search algorithms.

The problem itself can run into unpleasant cases when the document is not fully sharp (blurred, or printed at low resolution), the strips are narrow, and they were cut by a physical shredder with dull edges, because standard merging methods, like those used for panoramic photos, give poor results. This is due to the information lost on the narrow strips; by contrast, if you have a complete digital image cut into pieces, it is merely as hard as a jigsaw puzzle, whereas the non-digital case falls back on approximate search.

Another problem in making the algorithm automatic is feeding in the pieces: you can rarely supply strips aligned with the axis, so to start the process it is best to input all the strips as a single image with the pieces laid out by hand, which introduces another (easy) problem: detecting the blobs and rotating them.

Special shredders produce very small rectangles instead of strips. For comparison, a class P-1 shredder gives strips 6 to 12 mm wide of any length (roughly 1800 mm^2), while a class P-7 shredder produces rectangles with an area smaller than 5 mm^2. When you get rectangles instead of strips, the problem yields (4n)! permutations, assuming a one-sided document; and if a bag contains many fragments of unrelated documents (no images, only text), the problem is not really tractable.

Visa processing times: What does "Select the location from which you are applying" mean?

I feel silly asking about this, but I'm nervous about the visa process at this point.

1) On the UK government site for checking "Visa processing times", you must "Select the location from which you are applying".

2) The way I read it, that means my city, my state, or my country.

3) However, the drop-down menu offers a limited mix of cities and countries.

a) It has some major cities like Chicago, NY and DC, but nothing near me.

b) It has an option for "DHS-VAC, USA", which according to Google refers to "Application Support Centers".

4) What does that mean exactly?

5) Since it seems to have limited options, I really do not understand exactly what "Select the location from which you are applying" means.

Image processing – Repairing a panorama

I was on a cruise and the panorama I took was distorted due to the rocking of the boat:

[image: the distorted panorama]

I hope to correct this so that the horizon is level. Here is the full resolution image:

pano = CloudImport @ CloudObject[
 "https://www.wolframcloud.com/objects/505ab4f5-2f3b-4284-b159-23b650ec45e6"];
{w, h} = ImageDimensions @ pano

This is what I have tried so far: basically building an Interpolation function for ImageTransformation to use:

(* points chosen manually *)
horizon = {{10824.347571942448`, 1828.3412145283764`}, {8600.42723321343`, 1926.247726818544`}, {6562.4985…};
stable = {{16036.104578836932`, 117.17682104316555`}, {16057.281849520385`, 3465.1293964828137`}, {8767.18545163869`, 3474.4392111310954`}, {8663.24290567546`, 173.2403202438045`}, {349.88534172661866`, 160.86133593125487`}, {343.2354741207032`, 3377.453449740208`}};
fixed = Thread@{horizon[[All, 1]], horizon[[All, 2]] // Mean};

HighlightImage[pano, {Red, Point[horizon], Blue, Point[stable],
  Green, Line[Thread[{horizon, fixed}]]}]

[image: the panorama with the horizon points (red), stable points (blue) and correction vectors (green) highlighted]

t = Thread[{horizon, fixed}]; s = Thread[{stable, stable}];
inter = Interpolation[Join[t, s], InterpolationOrder -> All];
f[{x_: 0, y_: 0}] := With[{v = inter[x, y]}, {Clip[v[[1]], {0, w}], Clip[v[[2]], {0, h}]}]

So it looks like this should correct it:

[image: visualization of the intended correction]

But then this returns a blank image:

ImageForwardTransformation[pano, f]

Signal processing – How to determine the divisor for a composite sine wave

Suppose I add two sine waves $f(x) = \sin(2\pi x)$ and $g(x) = \sin(2\pi x)$ together and then divide by 2, like this: $\frac{f(x) + g(x)}{2}$. If I graph this, I get a plot where all three sine waves have a maximum amplitude of 1. However, I have discovered that dividing by 2 does not work if, for example, $g(x) = \sin(4\pi x)$. In other words, if I want the peak amplitude of this composite wave to be ~1, I have to divide by something like 1.76. However, I am not sure why, since I arrived at this number by looking at the graph. I wonder if there is a method to determine what the divisor should be so that the maximum/minimum amplitude ends up at a given value (such as 1) after adding two (or ideally more) waves. Thanks in advance to everyone.
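
For this particular pair the divisor can be worked out exactly: setting the derivative of $\sin\theta + \sin 2\theta$ to zero gives $\cos\theta = \frac{\sqrt{33} - 1}{8}$, and the corresponding peak is about 1.7602. For an arbitrary mix of waves, a practical approach is simply to sample the composite densely over a full period and divide by the largest absolute value, as in this small numpy sketch (the variable names are mine):

    import numpy as np

    # Sample sin(2*pi*x) + sin(4*pi*x) densely over one period and find its peak.
    x = np.linspace(0.0, 1.0, 200001)
    composite = np.sin(2 * np.pi * x) + np.sin(4 * np.pi * x)
    divisor = np.max(np.abs(composite))
    print(divisor)                          # ~1.7602
    normalized = composite / divisor        # the normalized wave now peaks at 1
    print(np.max(np.abs(normalized)))       # 1.0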

c# – Implement parallel processing in a recursive algorithm

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main(string[] args)
    {
        List<int> p = new List<int> { 1, 2, 3, 4 };
        Console.WriteLine(F(4, 4, p));
    }

    public static int F(int k, int n, List<int> p)
    {
        List<int> maxList = new List<int>();
        int ats = 0;

        if (n == 0) { return 0; }
        if (k == 1)
        {
            ats = 0;

            for (int i = 0; i < n; i++)
            {
                ats += p[i];
            }
            return ats;
        }
        else
        {
            for (int i = 0; i < n; i++)
            {
                ats = 0;
                for (int j = i; j < n; j++)
                {
                    ats += p[j];
                }
                int funkcija = F(k - 1, n - 1, p);
                if (ats > funkcija)
                {
                    maxList.Add(ats);
                }
                else
                {
                    maxList.Add(funkcija);
                }
            }
        }
        ats = maxList.Min();
        return ats;
    }
}

I need to change the algorithm above to use parallel processing. However, I cannot figure out how, because when I try to change it, it no longer works. I think it fails because the recursive calls that run on different cores do not share information with each other.

Image processing: how recent smartphones get higher resolution images through pixel binning

First of all, I should state that I do not have any particular knowledge about photography. I am not qualified to understand the technical processes.

However, I recently came across some discussions about phones that have only 12 MP sensors yet take 48 MP shots (the Redmi Note 7, for example). I read up on this topic, which I found interesting, and people mentioned the pixel binning technique, which I have also read about. From what I understood, pixel binning should REDUCE the resolution of the image.

Hence this question: how do recent smartphones get higher resolution images through "pixel binning"?

I have not been able to find a concrete answer to this question, and I hope someone can explain it to me in simple terms. Thanks in advance.