python – Converting pandas and NumPy data into a dictionary and back to the original format for a data split

I’m trying to refactor some code so that it does not have so much repetition. What I’m trying to do is create the input for a multiple-channel/input neural network. The features being considered come from two completely different sources, and inputA here is a 2D array and has to be kept in this format.

I have the following code:

# Create input values
inputA = word_embeddings.numpy()
inputB = df["Features"].values
y = df["Target"].values

full_model_inputs = [inputA, inputB]

# Create dictionary
original_model_inputs = dict(inputA=inputA, inputB=inputB)

# Create train and validation data from inputs
# Preserve data dimensionality for the data split
# (renamed to split_df so the source DataFrame df is not shadowed)
split_df = pd.DataFrame({"inputA": original_model_inputs["inputA"],
                         "inputB": list(original_model_inputs["inputB"])})

# Data split
x_train, x_valid, y_train, y_valid = train_test_split(split_df, y, test_size=0.25)

# Convert back to original format
x_train = x_train.to_dict("list")
x_valid = x_valid.to_dict("list")

# Format dictionary items as arrays so they work as model inputs
x_train = {k: np.array(v) for k, v in x_train.items()}
x_valid = {k: np.array(v) for k, v in x_valid.items()}

Are there any suggestions to improve this code? Just want some insight from the community.

What the dictionary looks like:

{'inputA': array([40., 68., 46., ..., 60., 42., 50.]),
 'inputB': array([[-1.915694  , -2.39863253, -1.75456583, ...,  2.11158562,
          2.42145038,  1.0996474 ],
        [-1.99583805, -2.38059568, -1.94454968, ...,  2.14585209,
          2.56227231,  1.2808286 ],
        [-2.1607585 , -2.29914975, -1.85722673, ...,  2.04741383,
          2.34712863,  1.77104282],
        ...,
        [-2.1576829 , -2.28505015, -1.71492636, ...,  2.05909061,
          2.43704724,  1.90647388],
        [-1.81904769, -2.74457788, -2.15936947, ...,  2.31333733,
          2.50243115,  1.75907826],
        [-2.01300311, -2.32310271, -2.00470185, ...,  2.09641671,
          2.53372359,  1.22000134]])}
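If the only reason for the DataFrame detour is keeping the two inputs aligned during the split, one possible simplification is to pass all the arrays to train_test_split at once, since it splits any number of same-length arrays consistently. A sketch with random stand-in data (word_embeddings and df are not shown in the question, so these arrays are assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for word_embeddings.numpy() and the DataFrame columns.
inputA = np.arange(100, dtype=float)     # per-sample values, as in the dict above
inputB = np.random.rand(100, 8)          # 2-D feature array
y = np.random.randint(0, 2, size=100)

# train_test_split splits every array it is given with the same row permutation,
# so the DataFrame round trip and to_dict/np.array conversions are unnecessary.
a_train, a_valid, b_train, b_valid, y_train, y_valid = train_test_split(
    inputA, inputB, y, test_size=0.25, random_state=0)

x_train = {"inputA": a_train, "inputB": b_train}
x_valid = {"inputA": a_valid, "inputB": b_valid}
```

This keeps inputB 2-D end to end, since nothing is ever flattened into DataFrame cells.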

file recovery – How to get photos with smart previews out of Lightroom if the original photos cannot be located?

In some circumstances, exporting the image will work. The exported image will usually be lower-quality than the original, as this is how smart previews are stored by Lightroom.

However, sometimes, the images will not be exportable even with a smart preview. In these situations, you should go find your smart previews.

Where Smart Previews are Stored

The smart previews are stored in the Lightroom Catalog Smart Previews.lrdata folder in your Lightroom catalog folder. For more information on where Lightroom catalog folders are stored, please visit this Adobe FAQ.

Inside that folder are other folders, named 0 through F. Inside those folders are further subfolders, and inside these deeply nested folders are the Lightroom smart previews, stored as DNG files. These DNGs are lower resolution and smaller than the originals (both in file size and in pixel dimensions). On a PC, DNG files can be opened by Microsoft Photos, although by default they will open in Snip & Sketch.
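If you need to collect all of the preview DNGs at once rather than clicking through the nested folders, a short Python sketch (the folder name below is the default; adjust the path to wherever your catalog lives):

```python
from pathlib import Path

# Assumed path -- substitute the actual location of your catalog's .lrdata folder.
previews = Path("Lightroom Catalog Smart Previews.lrdata")

# The DNGs sit a few levels deep (0-F, then further subfolders),
# so a recursive search finds them all regardless of nesting.
for dng in sorted(previews.rglob("*.dng")):
    print(dng)
```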

Print a new vector by changing the order of the elements of the original vector

How do I change the order of the elements of a vector v1, printing a new vector v2 with those changes?

Example:

v1 = [2,51,68,10]
v2 = [51,2,10,68]
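Judging from the example, v2 is v1 with each adjacent pair of elements swapped. A minimal Python sketch of that reading (the function name is mine; a trailing unpaired element is left in place):

```python
def swap_pairs(v1):
    """Return a new list with each adjacent pair of elements swapped."""
    v2 = v1[:]                                  # copy, so v1 stays unchanged
    for i in range(0, len(v2) - 1, 2):
        v2[i], v2[i + 1] = v2[i + 1], v2[i]
    return v2

print(swap_pairs([2, 51, 68, 10]))              # -> [51, 2, 10, 68]
```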

If your phone was compromised by a government body, how would you return it to its original state?

Say your phone was confiscated and consequently compromised, how would you ensure that it is no longer bugged?

networking – Docker: Converting an existing legacy system to Dockerized form while maintaining original network scheme

I’m in the midst of a project meant to convert an existing VoIP legacy system into dockerized form. The existing system consists of 5 different Linux machines, and each machine has 2 different network interfaces – one exposed to the public WAN, and the other on a private LAN. I plan on creating a Docker Compose file to set up the orchestration.

The network roughly looks like this:

Server #1 Eth0: IP 192.168.0.200/24 Eth1: IP X.X.X.65/27

Server #2 Eth0: IP 192.168.0.201/24 Eth1: IP X.X.X.66/27

Server #3 Eth0: IP 192.168.0.202/24 Eth1: IP X.X.X.87/27

Server #4 Eth0: IP 192.168.0.203/24 Eth1: IP Y.Y.Y.240/27

Server #5 Eth0: IP 192.168.0.204/24 Eth1: IP Y.Y.Y.241/27

Servers 1-3 are part of the same subnet, as are servers 4-5.

I am trying to find the best way to convert this network setup into Docker networks. I want every container to preserve its public IP (the one on Eth1, meaning that traffic generated from the container will keep the same public IP it had on the original server), but also to be able to communicate with every other Docker container on the same private net, while keeping the setup easily manageable with the least overhead possible.

Would it be possible to mix a bridge network connecting every Docker container with a Macvlan network per container, each bound to a different network interface at the host level?

Can I create only 2 network interfaces on the host machine, one per subnet, while maintaining the different IP addresses on them (one interface carrying 2 IPs, the other 3, with each interface backing a corresponding Macvlan Docker network)?

Is there a better way to make this work?
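To make the mixed bridge-plus-macvlan idea concrete, a rough Compose sketch for one server and one public subnet (interface names, image names, and the 203.0.113.64/27 placeholder standing in for X.X.X.64/27 are all assumptions):

```yaml
networks:
  private:                      # shared bridge for container-to-container traffic
    driver: bridge
  wan_a:                        # macvlan bound to the host NIC on the first public subnet
    driver: macvlan
    driver_opts:
      parent: eth1              # host interface carrying X.X.X.64/27
    ipam:
      config:
        - subnet: 203.0.113.64/27

services:
  server1:
    image: voip-server1         # hypothetical image name
    networks:
      private: {}
      wan_a:
        ipv4_address: 203.0.113.65
```

A second macvlan network (say, wan_b on the interface carrying Y.Y.Y.224/27) would be declared the same way for servers 4-5. One macvlan caveat worth knowing: by default the host itself cannot reach containers on a macvlan network directly, which matters if the host must also talk to these services.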

unity – Clone size is bigger than the original?

I’m trying to clone the items inside an array, but the clones I’m getting are bigger than the originals! Also, after a couple of seconds the script is missing/removing some clones!

Update 1:
I’m working on a puzzle game like “Candy Crush”. I have items to fill into slots (“dots”). What the script should do is clone my items and insert them at the slot positions. What happens is that the clones come out bigger than the originals, and after a few seconds the script is missing/removing some clones from “Clone_Ghost”!

public GameObject[] Items; // 4 elements
public GameObject[] dots; // where to place my clones
public GameObject ClonePlace; // held inside the canvas
public GameObject[] Clone_Ghost; //


void Start()
{

    Clone_Ghost = new GameObject[dots.Length];

    for (int i = 0; i < dots.Length; i++)
    {
        Clone_Ghost[i] = Instantiate(Items[Random.Range(0, 3)], dots[i].transform.position, dots[i].transform.rotation);
        Clone_Ghost[i].transform.parent = ClonePlace.transform;
    }

}

Update:
Adding this line at the end of the loop solved the scale problem:

    Clone_Ghost[i].transform.localScale = new Vector3(1, 1, 1);

And the “missing/removing” was caused by another script that destroys matched items.

postfix – Retrieve original bounced email as eml / attachment

When I receive bounce notifications from postfix, the original message appears to be encapsulated as message/rfc822. This is exactly what I want but for some reason it’s not available as an attachment in gmail, but instead is displayed inline as a ------- Forwarded message -------. Is this a shortcoming of gmail or a problem with my postfix configuration?

At the moment I’ve been downloading the bounce notification and then copy/pasting everything between the encapsulation boundary and the closing boundary. This seems a bit messy.

Is there an easy way to extract the original bounced message in eml format?
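If a script is acceptable, Python's standard-library email package can pull the encapsulated part out without any copy/pasting. A sketch (the function name is mine) that extracts the first message/rfc822 part of a saved bounce as raw .eml bytes:

```python
from email import policy
from email.parser import BytesParser

def extract_bounced_message(raw_bounce: bytes):
    """Return the encapsulated original message as raw .eml bytes, or None."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_bounce)
    for part in msg.walk():
        if part.get_content_type() == "message/rfc822":
            # The payload of a message/rfc822 part is the original message itself.
            return part.get_payload(0).as_bytes()
    return None
```

Download the bounce from Gmail (“Show original”, then download), run it through this function, and write the returned bytes to a .eml file.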

plotting – Extract original vertex from VoronoiMesh

Consider a set of random coordinates.

pts = SortBy[RandomReal[{-1, 1}, {25, 2}], {First, Last}];

and find its VoronoiMesh,

mesh = VoronoiMesh[pts]


Now, to find the faces, we can use mesh["Faces"] to list them based on the coordinates of the mesh vertices:

pts2 = mesh["Coordinates"];

which, for example, you can color as follows:

col = RandomColor[Length@pts];
Graphics[{{col, Polygon[pts2[[#]]] & /@ mesh["Faces"]}\[Transpose]}]


But I cannot find in mesh["Properties"] any entry related to the original vertices, pts.
In other words, I want to relate the faces (listed in mesh["Faces"]) to the original points (listed in pts).
For example, the property "PointInFaces" gives a point in each face, but not the original points:


I can use the following code to relate faces to original points, but it is time-consuming, particularly when the number of points is large.

VORfaces = Polygon[pts2[[#]]] & /@ mesh["Faces"];
Flatten@Position[# \[Element] VORfaces[[1]] & /@ pts, True] // AbsoluteTiming

Another suggestion is to use PointInFaces together with the Nearest function.

ff = Nearest[pts -> "Index"]
originalpointIndex = Flatten[ff /@ mesh["PointInFaces"]]

By plotting them, you can see an excellent match:

Graphics[{{col, Polygon[pts2[[#]]] & /@ mesh["Faces"]}\[Transpose], Text[#[[1]], #[[2]]] & /@ ({Range[Length@pts], pts}\[Transpose]), White, Text[#[[1]], #[[2]]] & /@ ({originalpointIndex, mesh["PointInFaces"]}\[Transpose])}]


I am wondering whether there are any built-in properties that relate the mesh coordinates to the original vertices?

How do I identify whether my Lightning EarPods are original?

When I connect my EarPods, bought along with an iPhone from a suspect source, I can see something like this:


Isn’t that weird? Are they original or not?

paperwork – Do I need to bring an original birth certificate with me when moving overseas?

I have recently accepted a job offer in Japan and am looking at getting my documents in order for moving there to live.

Do I need to bring my original birth certificate or will a certified copy be sufficient? I would prefer to keep the original safely with my family.