mysql – What is the name of the process of creating a new DB from an exported snapshot?

I need to set up an automated process that:

  1. creates a snapshot of an AWS RDS (MySQL) DB
  2. uses that snapshot to create a new RDS instance that will be used as an “analytics playground”

I’m looking for the proper name to call the process described in step 2 above. So I ask: what would the DBA community call the process of using a DB snapshot (export) to create a brand new database? Materializing? Restoring? Backing up? Something else?
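
For what it’s worth, the AWS API itself calls step 2 a “restore”: the operation is RestoreDBInstanceFromDBSnapshot. A minimal boto3 sketch of both steps, assuming configured AWS credentials and hypothetical instance/snapshot names:

import boto3

rds = boto3.client("rds")

# Step 1: snapshot the source instance ("prod-mysql" is a hypothetical name).
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-mysql",
    DBSnapshotIdentifier="analytics-snap",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="analytics-snap")

# Step 2: "restore" the snapshot into a brand-new instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="analytics-playground",  # the new analytics instance
    DBSnapshotIdentifier="analytics-snap",
)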

hardening – How can I improve my sheep dipping process?

Problem:

The hiring department occasionally sends me Word documents, often résumés, and asks me to clear the files as “safe” to open and review; they can come from anywhere and are often unsolicited job applications.

Based on the cornucopia of Word exploits out there and my relative inexperience, I get a slight “nnngg” feeling every time I save one to my on-network machine to “check”. I feel like I will be held responsible if I say something is safe, yet I realize attackers commonly test their malware against free and paid scanners to make sure it will pass undetected. Before I took on the role of “toothless cybersecurity champion”, they essentially opened everything and hoped the MSP caught/stopped the bad stuff.

My “checking”:

  1. Save the file from Outlook to the desktop.
  2. Upload the file to VirusTotal and Jotti’s Malware Scan.
  3. Scan it with the local endpoint scanner.
  4. If all scans are negative: tell them to view the document but never click “Enable Editing”, “Enable Macros”, etc. (So far I can only assume they follow this advice; I have no way of confirming it.)
    Yes, I realize this scanning could probably just be done by the recipients at this point, but that would only apply to documents of a non-sensitive nature, since I’m already disclosing them to VT and Jotti’s (a hash-only alternative is sketched below).
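
On the disclosure point: VirusTotal can also be queried by file hash, which discloses only the hash, never the document itself. A rough sketch of that lookup (assuming a VirusTotal API v3 key; the helper name is mine):

import hashlib
from typing import Optional

import requests

API_KEY = "YOUR-VT-API-KEY"  # assumption: you have a VirusTotal API key

def vt_hash_lookup(path: str) -> Optional[int]:
    """Return how many engines flag the file, or None if VT has never seen it."""
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": API_KEY},
    )
    if resp.status_code == 404:  # hash unknown to VirusTotal
        return None
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return stats["malicious"] + stats["suspicious"]

The catch: a targeted document nobody has uploaded before will simply come back unknown, so this helps with known samples but says nothing about bespoke malware.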

My improvement ideas:

  • Don’t accept unsolicited Word docs (hard to enforce and high business impact; I was actually laughed at for suggesting this).
  • Set up a VM or off-network machine, send/open everything there first, and monitor/binwalk it (high time commitment, questionable success).

Question:

Are there any other obvious (to someone with more experience) ways I could improve my sheep dipping process?

development process – What’s difficult about developing ML applications?

Many software projects have started to incorporate features based on machine learning models, and now more than ever there are a lot of applications that use machine learning.

I read that developing applications that use ML techniques is trickier than developing “classic” software applications. What makes it more difficult?

I could guess that the need to try multiple models and multiple parameters, and to keep track of all of them, makes this activity more difficult, but what else makes it challenging (if you think it is)?
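
To make that bookkeeping concrete, here is a minimal sketch (my illustration, using a scikit-learn toy dataset and arbitrary grid values): one model family with two hyperparameters on one data split already yields six runs that must be logged to stay reproducible, and real projects multiply that by model families, feature sets, and data versions.

import itertools
import json

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

runs = []
for depth, n_est in itertools.product([3, 5, 10], [50, 200]):
    model = RandomForestClassifier(max_depth=depth, n_estimators=n_est, random_state=0)
    score = cross_val_score(model, X, y, cv=3).mean()
    runs.append({"max_depth": depth, "n_estimators": n_est, "score": round(score, 4)})

# Persist the experiment log; in practice tools like MLflow automate this.
with open("runs.json", "w") as f:
    json.dump(runs, f, indent=2)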

stored procedures – Process of updating a record

I need help determining the process for updating a record.

Coming from an ORM background with Entity Framework this was the process:

  1. Retrieve the record from the database and populate an entity.
  2. Access the entity object and change the fields you want to update.
  3. Save the entity to the database context; Entity Framework then goes through all your entities, compares them with their original values, and executes an update statement for any that have changed.

What is the approach like when not using an ORM like EF?
I have classes that are a 1:1 mapping to my db schema, so should I follow this same approach when trying to update a record via stored procedure?

I have built a stored procedure that accepts an XML parameter, through which I can optionally pass any piece of data I want for the update, so in reality I don’t have to make the data conform to one of the aforementioned classes for the update…

For example, if I had a users table with firstname and lastname and only needed to update firstname, I could pass just firstname to the stored procedure.
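
A sketch of that “update only what you’re given” idea outside of EF (Python and SQLite stand in purely for illustration; the pattern, not the stack, is the point):

import sqlite3

def update_user(conn, user_id, **fields):
    """Update only the columns actually supplied, e.g. just firstname."""
    if not fields:
        return
    # Column names must come from a trusted whitelist; only values are bound.
    assignments = ", ".join(f"{col} = ?" for col in fields)
    params = [*fields.values(), user_id]
    conn.execute(f"UPDATE users SET {assignments} WHERE id = ?", params)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, firstname TEXT, lastname TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Jane', 'Doe')")
update_user(conn, 1, firstname="Ann")  # lastname is left untouched
print(conn.execute("SELECT * FROM users").fetchone())  # (1, 'Ann', 'Doe')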

Terminating a Docker container with 1 process in S6 Overlay takes > 10 sec

I’m frustrated by the time it takes my container to shut down when using the S6 overlay. As far as I understand, s6 should run as PID 1 and should issue SIGTERM to all child processes (postfix) when you run docker stop. I confirmed it is running as PID 1, but it still takes 10 seconds to stop. I tried the Tini init system and it shuts down instantly. What am I doing wrong here?

Dockerfile

FROM ubuntu:latest

# Add S6 Overlay
ADD https://github.com/just-containers/s6-overlay/releases/download/v2.2.0.1/s6-overlay-amd64-installer /tmp/
RUN chmod +x /tmp/s6-overlay-amd64-installer && /tmp/s6-overlay-amd64-installer /

# Add S6 Socklog
ADD https://github.com/just-containers/socklog-overlay/releases/download/v3.1.1-1/socklog-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/socklog-overlay-amd64.tar.gz -C /

ARG TZ=America/Denver
RUN ln -snf /usr/share/zoneinfo/${TZ} /etc/localtime && echo ${TZ} > /etc/timezone

RUN ["/bin/bash", "-c", "debconf-set-selections <<< 'postfix postfix/mailname string test.com'"]
RUN ["/bin/bash", "-c", "debconf-set-selections <<< \"postfix postfix/main_mailer_type string 'Internet Site'\""]

RUN apt update && \
    apt upgrade -y && \
    DEBIAN_FRONTEND=noninteractive apt install -y --no-install-recommends \
        postfix && \
    apt -y autoremove && \
    apt -y clean autoclean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp /var/cache

ENTRYPOINT ["/init"]
CMD ["postfix", "start-fg"]

Build the Image: docker build -t test .

Run the Image: docker run --name test --rm -d test
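
For reference (my observation, not part of the original post): 10 seconds is exactly docker stop’s default grace period before it gives up on SIGTERM and sends SIGKILL, so timing the shutdown, e.g. with time docker stop test, can show whether the container is being killed rather than exiting cleanly.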

crash – Need help with a new system that keeps crashing. A process labelled “web content” is using 49 percent memory

I found from older posts that I can use top to check which processes are using too much CPU and memory. I did this and something stood out to me: there is a process with the command label “Web Content” that is constantly using about 49 percent of the memory.
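
(A side note of mine, not from the original post: inside top, pressing Shift+M sorts processes by memory usage, and ps aux --sort=-%mem | head gives a quick one-shot view of the top memory consumers.)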

These are my first few days using Ubuntu, but this seems crazy high to me, and I wanted to confirm that it is not normal behavior. top also shows firefox-bin using about 7 percent of memory, which confused me further. Does that mean “Web Content” is something entirely different?

I wanted to look into this because the system keeps freezing at random, and sometimes at every step when I am trying to do something. For example, if I just want to switch from Firefox to VS Code, the system freezes. Sometimes it basically becomes a brick.

  • Software: Ubuntu 18.04.2
  • Processor: Intel i3
  • RAM: 8 GB
  • Software recently updated: yes

c++ – Is it possible to process data on relay servers?

I have a small game engine written in Java, and I am re-creating it in C++. While re-creating it, I’ve decided to add basic P2P online multiplayer. However, after reading more about networking, I found there are multiple models, and the client-server host with relay model seems attractive, since it overcomes most of the NAT/port-forwarding issues and does not expose players’ public IPs.

My question is: can a relay server be used to do some minimal data processing? The goal is not to run a dedicated, full-featured server application, but just to do minimal calculations on the data to detect some basic cheats.

For example: preventing internal speed hacks by having the client send its system timestamp every 60 cycles; the relay would store the previous timestamp, compare it with the next one, and disconnect the player if the time difference is less than 1000 ms.
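
Sketched in Python for brevity (the engine itself is C++), and purely as an assumption about how such a check could look: the relay stamps each report with its own clock, so a client whose 60-cycle reports arrive faster than real time gets flagged.

import time

EXPECTED_MS = 1000   # 60 cycles at 60 Hz, per the example above
TOLERANCE_MS = 100   # hypothetical allowance for network jitter

last_seen = {}       # player_id -> relay-side arrival time in ms

def on_report(player_id: int) -> bool:
    """Return False if the player should be disconnected for reporting too fast."""
    now = time.monotonic() * 1000
    prev = last_seen.get(player_id)
    last_seen[player_id] = now
    if prev is None:
        return True  # first report; nothing to compare against
    return (now - prev) >= EXPECTED_MS - TOLERANCE_MS

Measuring arrival times on the relay’s own clock, rather than trusting the client’s reported timestamp, is a deliberate choice in this sketch: a speed hack that hooks the client’s timing functions would falsify the reported values too.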

pathfinder 1e – Can a simulacrum be healed by any means other than the ‘complex process’ outlined in the spell?

What is the process of implementing a supervised machine learning feature in a software product?

For context, I’ve got a heavy background in software development and am skilled with reporting tools, databases, and SQL, so I have a solid understanding of datasets. I also generally get the idea that you can train a machine learning model in “supervised learning” by providing it a training set of input data matched with output data, and then ask it to predict the output for new input data not in the training set.

Let’s say I have a number of problems that I think can be solved via machine learning, two examples being:

  1. Lateness Predictability – There’s a task in my system where a user has to create a document. Documents may take a day or may take 10 days, depending on the complexity of the document’s specs. Other factors probably come into play, such as the experience of the person creating the document. This task is on the critical path of a larger set of steps, and it would be very helpful to look at historical data to determine whether the task will be completed on time or will be late.

  2. Iteration Predictability – Let’s say once that document is done it goes through reviews, and it would be helpful to predict how many iterations that may take. Again, factors like document spec complexity and who the reviewer and the creator are come into play.

How do I go about designing and building a product feature that lets me understand the likely outcome of these problems (for 1, the likelihood that the task will be late; for 2, the likely number of iterations), as well as identify the largest factors affecting that prediction (e.g., this task is likely to be late because this person has almost never completed it as quickly as expected)?

I have a loose idea that the process would involve:

  1. Building a training set of data for each of these problems.
  2. Running a process to have the machine “learn”.
  3. Taking the output of that (which is a model?) and running any new problems through it.
  4. Examining the model to determine which factors are weighted most heavily in the outcome.

I have a hard time understanding the tactical steps to take here. For example:

  • What kind of skill set is needed for steps 1 and 2? Do I need someone highly skilled in statistics to identify the key variables to feed in as the input dataset, or do I gather all the input data I can find and let the machine learning process identify the right input variables?
  • How do I take that output and implement it in a product?
    • Is the model output just a static formula that I can implement in my software, or is that not possible?
    • If I want the machine to continually learn and course-correct, is that just a process that dynamically updates the model as it sees new inputs and outputs?
  • How do I extract the ‘factors’ of why something was predicted as such?

I’d love to hear a practical example of how someone took a machine learning problem and implemented it in, say, a web application.
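
A minimal, hypothetical sketch of the end-to-end loop for problem 1 (all column names and values are invented; a real feature would swap in your historical task data):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Step 1: a training set built from historical data, one row per finished task.
df = pd.DataFrame({
    "spec_complexity":   [2, 8, 5, 9, 3, 7],
    "author_experience": [5, 1, 3, 2, 4, 1],
    "was_late":          [0, 1, 0, 1, 0, 1],  # the label to predict
})
X, y = df.drop(columns="was_late"), df["was_late"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

# Step 2: the "learning" itself is one call once the data is in shape.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Step 3: the model is an object you persist (e.g. with joblib) and call from
# your product; it is generally not a static formula you can transcribe by hand.
print(model.predict_proba(X_test)[:, 1])  # P(late) for unseen tasks

# Step 4: a first cut at "why": global feature importances.
print(dict(zip(X.columns, model.feature_importances_)))

For per-prediction explanations (why this particular task is predicted late), people typically reach for permutation importance or SHAP values rather than the global importances shown here.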

design process – UX & Agile: What criteria could change the complexity of a UX user story?

I’m part of a cross-functional dev team; we are applying Agile and also trying to adopt UX. Right now we are having a hard time estimating the UX user stories. What criteria can be used to estimate their complexity points? I’m also interested in hearing about similar experiences (from UX designers who apply Agile).