How do local/remote development server environments work for testing within workplaces?

I am a trainee software engineer and recently started working at a small company. I have been working locally on my PC on a Java project that I pulled from version control. I now want to test the part of the code that calls another system to retrieve some data, but I am unsure how testing in a dev environment works. I am working from home, the developers on my team are usually too busy to answer my questions, and I am largely left to learn this by myself with little to no guidance.

I do know we have a remote Linux server as a dev environment. What I do not understand is how it all fits together: how do I get my code/project onto the remote server, or can I run the project locally and have it deployed to the remote server? I also do not quite understand how I would write code in a shell without an IDE. Since I am new to all of this, I am constantly testing whether my code has errors; if all the developers have access to the same remote server, how can I test my code without impacting the other developers' work?

Another question: instead of using a remote Linux server for development testing, is it possible to set things up so that I can run the project locally on my PC instead? I think I would prefer that route, but I am not sure whether it is possible, and I wonder why some workplaces use a remote server rather than local development testing.
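To make the question concrete, here is the kind of workflow I imagine (Maven, SSH access, and every host name and path below are just my guesses, not how my company actually does it):

```shell
# Build locally in the IDE/terminal, then copy the jar to the shared dev
# server and run it there. devbox.example.com and the paths are placeholders;
# using a per-developer directory avoids trampling colleagues' deployments.
build_and_deploy() {
    mvn -q package
    scp target/myapp.jar dev@devbox.example.com:/opt/apps/alice/
    ssh dev@devbox.example.com 'java -jar /opt/apps/alice/myapp.jar'
}
```

Is something like this roughly right, or is the deployment to the dev server normally automated?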

Any help is appreciated!


8 – Querying all websites in a multisite environment

We’re running a D8 multisite environment (same codebase, different databases) with ~50 websites on it. We would like a dashboard giving an overview of some basic information:

  • GTM tag of every site
  • Users of every site

I’m looking for the best way to query all the installed websites in a generic way: when a new website enters the multisite, its data automatically gets added to the dashboard.

This can be a custom module that is only accessible by the admin in every site. Any suggestions?
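For illustration, the naive approach I can think of is a drush loop over the site directories (the `google_tag.settings container_id` config key is a guess on my part, as are the table/column names):

```shell
# Enumerate site directories: each sites/<name>/settings.php is one site.
list_sites() {
    for f in sites/*/settings.php; do
        basename "$(dirname "$f")"     # e.g. site1.example.com
    done
}

# Query each site with drush, targeting it via --uri.
site_report() {
    for site in $(list_sites); do
        echo "== $site =="
        drush --uri="$site" config:get google_tag.settings container_id
        drush --uri="$site" sql:query "SELECT name, mail FROM users_field_data"
    done
}
```

But a custom module that aggregates this into one page would obviously be nicer than shelling out.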

autoconf – How to portably set an environment variable in autotools

How can I portably set an environment variable?

I want my configure script to set an environment variable. On Linux and macOS this would be:

export MYVAR=somevalue

On Windows it is (presumably, I don’t use Windows):

setx MYVAR "somevalue"

How do I express this in the script? Is there an AC_* macro that outputs the command for the current OS?
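One wrinkle I'm aware of: configure runs as a child process, so it cannot modify the invoking shell's environment on any OS. The only workaround I can think of (a sketch of my own, not a confirmed macro) is to substitute the value into a generated file that callers source:

```
# configure.ac fragment (sketch):
AC_SUBST([MYVAR], [somevalue])
AC_CONFIG_FILES([myenv.sh])

# myenv.sh.in:
export MYVAR=@MYVAR@

# callers then run:
. ./myenv.sh
```

Is there a more standard autotools way to do this?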


8 – How is state specific to an environment, when it’s actually pushed to all environments with a DB sync?

When you copy your development/testing database to your production
database, you’re effectively replacing your production site’s state
with your development site’s state. Therefore if you put your
production site in maintenance mode, then import a database that says
the site is not in maintenance mode, then your site will stop being in
maintenance mode.

Conceptually, state is specific to an environment, so long as you
aren’t copying your database when you push changes from development to
production.

Your solution may be to avoid doing these database imports altogether.
If you are just making configuration changes on your local development
site and just need to push those, you can export the site’s
configuration to code, commit it to version control, push that config
to production and then import it.

If you are creating content in the local development environment that
you need to push to production, then you may want to look at using the
Migrate API.

If you absolutely must push these database updates but want the
production site to remain in maintenance mode when you do it, then
you’ll need to put your development site into maintenance mode before
copying its database to production.
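The config-export route can be sketched as a pair of scripts (drush is assumed, and the `config/sync` path and commit message are examples, not a prescription):

```shell
# On the development site: export config and commit it to version control.
export_config() {
    drush config:export -y
    git add config/sync
    git commit -m "Export config changes"
    git push
}

# On the production site: pull the commit and import, keeping the site in
# maintenance mode (a state value, so it survives the config import).
import_config() {
    drush state:set system.maintenance_mode 1
    git pull
    drush config:import -y
    drush state:set system.maintenance_mode 0
}
```

Because maintenance mode lives in state rather than config, it is untouched by `config:import` — which is exactly the separation the question is about.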

design patterns – Proving that feature flags turned off items in a production environment

We want to implement feature deploy flags, so that the development environment has a new product feature toggled on, while the release and production environments have it turned off.

It’s toggled through the appsettings.json file.


Our clients do not like feature flags and want us to use different Git source-control branches instead. Their argument is: “We cannot ensure the features are turned off in the production environment. How do you know there is no new code leaking in?”

Well, we took screenshots of our APIs not responding in Swagger/Postman, and additionally showed a “Page Not Found” error when browsing to the new feature’s webpage.

What else can we do to prove that the feature flags are turned off in production? How would someone prove this?

It would be more confusing for developers to create a new source-control branch for every toggle.
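One stronger-than-screenshots option is an automated smoke test that runs against production after every deploy and fails if a flagged endpoint becomes reachable. Here is a minimal sketch in Python (not our actual stack; the base URL and endpoint below are placeholders):

```python
# Probe a feature-flagged endpoint and report whether it is hidden.
import urllib.request
import urllib.error

def feature_is_disabled(base_url: str, endpoint: str) -> bool:
    """True only if the endpoint returns 404, i.e. the feature is routed nowhere."""
    try:
        urllib.request.urlopen(base_url + endpoint, timeout=5)
        return False                 # 2xx: the feature is live -> flag leaked
    except urllib.error.HTTPError as e:
        return e.code == 404         # 404: feature is off
    except urllib.error.URLError:
        return False                 # unreachable host: can't prove anything
```

Wired into the release pipeline (e.g. one call per flag), this asserts the “off” state continuously, instead of only at the moment a screenshot was taken.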

Developing with an application using oauth (bypass in dev environment)

I guess the answer to this is more a matter of opinion than anything, but I am using Azure Active Directory to validate users in my web application. This can obviously be a pain, as I have to log in to my Microsoft account just to start working in my development environment. I’m wondering what works for other developers to avoid wasting time? I suppose I could make a back-door (or bypass) and not include it in my Git repo, but then it wouldn’t be usable by the other developers on my team.
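One pattern worth considering (a Python sketch of the idea, not Azure-specific; the environment-variable name and the token-check function are made up) is a bypass that lives in the repo but only activates via a local environment variable, so teammates share it without it ever being active in production:

```python
import os

def verify_azure_ad_token():
    # Stand-in for the real Azure AD token validation (hypothetical).
    raise PermissionError("no valid token")

def auth_required(view):
    """Decorator: skip the real OAuth check when DEV_AUTH_BYPASS=1 is set locally."""
    def wrapper(*args, **kwargs):
        if os.environ.get("DEV_AUTH_BYPASS") == "1":
            return view(*args, **kwargs)   # dev shortcut; never set in prod
        verify_azure_ad_token()            # normal path: validate the token
        return view(*args, **kwargs)
    return wrapper

@auth_required
def dashboard():
    return "secret dashboard"
```

Since the switch is an environment variable rather than code, nothing needs to be kept out of version control, and production simply never sets it.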

Best strategy for building a stage environment, replicating production in MongoDB atlas?

Our company is using MongoDB Atlas (the database-as-a-service), Atlas Search, and Realm functions. It works great, with our production cluster being an M10. Our objective is now to set up a stage environment that works as a mirror of production, to test “everything” before deploying to production (i.e. stage should have read/write). I would like your input on how to effectively replicate the production data. I can imagine several different approaches:

  1. Using the MongoDB Atlas REST APIs – perhaps their backup/restore endpoints?
  2. Using mongorestore and mongodump, with a custom script for realm and atlas search.
  3. Write a “custom” Node.js script for everything, copying the data from production to stage.

No matter the solution, I imagine this should run on a schedule, or every time stage is rebuilt.

Thanks for your input – would be great to hear your opinion on this one 🤔
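For reference, option 2 would look roughly like this (a sketch: the connection-string variables are placeholders, mongodump/mongorestore are assumed on PATH, and Realm/Atlas Search config would still need separate handling as noted):

```shell
# Dump production into a single archive, then restore it over stage.
# --drop removes stage collections first so stage is a clean mirror.
refresh_stage() {
    mongodump    --uri="$PROD_URI"  --archive=/tmp/prod.archive
    mongorestore --uri="$STAGE_URI" --archive=/tmp/prod.archive --drop
    rm -f /tmp/prod.archive
}
```

Run from cron or CI, this covers the data; Realm functions and Atlas Search indexes are configuration, so they are better promoted via their own export/deploy tooling than copied with the data.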

configuration – Config Error for Curl / LibCurl when working with rpy2 and pybrms in Conda environment

I’m working with Ubuntu 20.04 using Plasma KDE Desktop.

I’m setting up a data analysis and ML/DL environment using Miniconda3 where I can use both R and Python via rpy2. Specifically, I use the pymc3, pystan, and bambi libraries to build advanced Bayesian models. I recently found out about the pybrms library, which allows me to call the brms R package, which in turn simplifies Bayesian model development using the Stan probabilistic language.

After I created the new environment with python==3.6 (an rpy2 requirement), I installed the following:

conda install -c r r-base # version 3.6.1 of R

conda install -c r rpy2==3.1.0 # pybrms needs rpy2==3.1.0 or higher

pip install pybrms

I had corresponded with the developer of pybrms, Adam Haber, via the library’s GitHub site after I ran into some issues. I used the code below after successfully installing r-base, rpy2, and pybrms:

import rpy2.robjects as robjects
import rpy2.robjects.packages as rpackages
from rpy2.robjects.vectors import StrVector

utils = rpackages.importr("utils")
utils.install_packages(StrVector(('rsconnect', 'brms')))  # installs brms and its dependencies

This is where things go haywire: brms has some dependencies that are curl-dependent and others that are independent of curl. All the packages independent of curl compiled and installed fine.

curl failed the first time with the following error:

Found pkg-config cflags and libs!

Using PKG_CFLAGS=-I/usr/include/x86_64-linux-gnu

Using PKG_LIBS=-lcurl

————————- ANTICONF ERROR —————————

Configuration failed because libcurl was not found. Try installing:

deb: libcurl4-openssl-dev (Debian, Ubuntu, etc)

rpm: libcurl-devel (Fedora, CentOS, RHEL)

csw: libcurl_dev (Solaris)

If libcurl is already installed, check that ‘pkg-config’ is in your

PATH and PKG_CONFIG_PATH contains a libcurl.pc file. If pkg-config

is unavailable you can set INCLUDE_DIR and LIB_DIR manually via:

R CMD INSTALL --configure-vars='INCLUDE_DIR=... LIB_DIR=...'

Solution 1
So I tried to install libcurl4-openssl-dev using sudo apt install libcurl4-openssl-dev.
I got the following message:

libcurl4-openssl-dev is already the newest version (7.68.0-1ubuntu2.2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Searching Stack Overflow, users suggested that curl needs to be compiled manually if R in a conda environment is unable to recognize libcurl4 ( )

So I used the code given there (I’m a beginner with Linux / Unix env)

Solution 2

I compiled curl from source following the steps given there, ending with:

make install

In the new miniconda3 environment where I installed rpy2, I checked whether the file libcurl.pc was present in the pkgconfig folder – it is, but I’m still unable to install curl.
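For what it’s worth, I believe the check that the curl package’s configure performs can be pointed at conda’s copy of libcurl.pc by exporting PKG_CONFIG_PATH before starting R (a sketch – whether this matches my failing build is my own assumption):

```shell
# Put the conda environment's pkgconfig directory first on the search path,
# so pkg-config (and R builds launched from this shell) find conda's
# libcurl.pc. $CONDA_PREFIX is set by `conda activate`.
export PKG_CONFIG_PATH="$CONDA_PREFIX/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
# sanity check: `pkg-config --libs libcurl` should now print -lcurl
```

Would that be the right direction, or does R inside conda ignore PKG_CONFIG_PATH?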

Solution 3 – Since rpy2 and pybrms were in a conda environment, I tried both of these steps:

conda install -c anaconda curl and conda install -c anaconda libcurl – both completed successfully.

I then tried to install curl again through rpy2/R – the same config error came up.

Solution 4

I removed libcurl4-openssl-dev, reinstalled it, and tried to install curl again – it failed again.

What am I doing wrong? What else can I do? Thanks in advance.

As an Ubuntu newbie, this forum has been an invaluable resource – Thank You!!!


webserver – Build and execute code on a sandboxed environment?

Numerous websites allow building and executing C code from a web browser (,,…). For my own application (education purposes) I would like to do the same on my web backend.

My current solution is to use an Alpine Docker container with gcc, constrained with ulimits. To avoid mounting files I simply drive gcc via stdin/stdout with:

protected $container = "frolvlad/alpine-gcc";
protected $cc = "gcc";
protected $cflags = "--static -std=c99 -Wall -pedantic";

protected $ulimit = [
    'locks' => 10,
    'sigpending' => 100,
    'rttime' => 1,
    'nproc' => 1,
    'nofile' => 50,
    'msgqueue' => 1000,
    'core' => 1,
    'cpu' => 2,
    'fsize' => 1000000,
    'memlock' => 1000,
];

protected function execute($cmd, $args = [], $stdin = null, $env = [])
{
    $descriptorspec = [
        0 => ["pipe", "r"],  // stdin
        1 => ["pipe", "w"],  // stdout
        2 => ["pipe", "w"],  // stderr
    ];

    $cwd = '/tmp';
    $ulimits = $this->getUlimitOptions();
    $docker = "docker run --stop-timeout 1 -i --rm $ulimits $this->container";

    $process = proc_open("$docker $cmd", $descriptorspec, $pipes, $cwd, $env);

    if (is_resource($process)) {
        if ($stdin) {
            fwrite($pipes[0], $stdin);
        }
        fclose($pipes[0]); // send EOF so the compiler stops waiting on stdin

        $stdout = stream_get_contents($pipes[1]);
        $stderr = stream_get_contents($pipes[2]);

        $exit_status = proc_close($process); // reap the process, get its exit code
    }

    return (object)[
        'stdout' => $stdout,
        'stderr' => $stderr,
        'exit_status' => $exit_status,
    ];
}

public function build($code, $args = [])
{
    return $this->execute("$this->cc -xc $this->cflags -o/dev/fd/1 -", $args, $code);
}

Execution is done the same way with:

public function run($executable, $args = [])
{
    // quote the pipeline so it runs inside the container, not on the host
    return $this->execute("sh -c 'cat > a.out && chmod +x a.out && timeout 1 ./a.out'", $args, $executable);
}

Would this solution be secure enough and what would be the possible improvements?

Of course, the backend API is throttled, and only authenticated users can access the build interface.
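As for possible improvements, a few docker flags are commonly suggested for this kind of sandbox. A sketch (the exact limit values are placeholders to tune, not something tested against this setup):

```shell
# Hardening flags kept in one variable so they can be spliced into the
# docker command string built in execute(). Values are illustrative.
DOCKER_HARDEN="--network none --cap-drop ALL --read-only --tmpfs /tmp --pids-limit 16 --memory 64m --cpus 0.5"

docker_cmd() {
    # $1 = image name; prints the full docker invocation
    echo "docker run --stop-timeout 1 -i --rm $DOCKER_HARDEN $1"
}
```

`--network none` removes outbound network access from untrusted code, `--cap-drop ALL` and `--read-only` shrink the attack surface, and `--pids-limit` guards against fork bombs alongside the existing ulimits.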