Is it possible to create an RDP session from Windows to a Linux Desktop session so the session will show only a specific program?

Case: I have a Windows host and an Ubuntu VM. I use the VM solely to run one specific piece of software. It would be really useful if I could view just that software's window, without the hassle of opening the full VM window.

Someone has already built the reverse (running Windows applications seamlessly from a Linux desktop) using KVM and RDP:
https://github.com/Fmstrat/winapps

Is there a way to do the same thing from Windows to a Linux desktop?
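For context, the closest thing I can picture on the Linux side is an RDP session that starts only the one application instead of a desktop. A rough, untested sketch, assuming xrdp on the VM and with myapp as a placeholder for the actual program:

#!/bin/sh
# ~/.xsession on the Ubuntu VM (untested sketch): instead of starting a
# full desktop, launch only the one program, so the RDP session would
# show nothing but that application's window.
exec myapp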

php – Getting session data with foreach

It seems simple, but I've spent the whole night researching and testing different approaches and can't get anywhere. So I decided to post.

I have the following item in the session:

("prefixo1")=>
  array(6) {
    ("getCreatedTime")=>
    int(1618373454)
    ("getCaption")=>
    string(17) "teste app legenda"
    ("getCommentsCount")=>
    int(3)
    ("getLikesCount")=>
    int(4)
    ("getLink")=>
    string(39) "https://..."
    ("getImageHighResolutionUrl")=>
    string(294) "https://..."
  }

My test code:

*line 59:* foreach ($_SESSION as $array) {
*line 60:* foreach($array as $key => $midia) {
*line 61:* print "$key : $midia<br>";
*line 62:* }
*line 63:* }

The result, in theory, looks satisfactory…

getCreatedTime : 1618373661
getCaption : teste app legenda
getCommentsCount : 3
getLikesCount : 4
getLink : https://...
getImageHighResolutionUrl : https://...

But I get the following errors:

Warning: foreach() argument must be of type array|object, string given **on line 60**
Warning: foreach() argument must be of type array|object, int given **on line 60**

And when I try to display a single value, for example, I get nothing back:

$midia['getCreatedTime'];

I still get the error:

Warning: Undefined array key "getCreatedTime"

Where am I going wrong?
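For reference, a minimal sketch of a guarded loop that skips the scalar session entries the warnings point at, plus direct access by key (assuming the structure dumped above; prefixo1 is the key shown there):

<?php
// Sketch: iterate only the session entries that are actually arrays;
// scalar entries (strings, ints) are what trigger the foreach() warnings.
foreach ($_SESSION as $name => $item) {
    if (!is_array($item)) {
        continue;
    }
    foreach ($item as $key => $midia) {
        print "$key : $midia<br>";
    }
}

// Direct access to a single nested value, based on the dump above:
print $_SESSION['prefixo1']['getCreatedTime'];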

kubernetes – kube2iam and iam-role-session-ttl argument – how can kube2iam set session validity?

kube2iam has a --iam-role-session-ttl argument that defaults to 15 minutes. The description of this option is

Length of session when assuming the roles (default 15m)

I thought the STS credentials provided by the EC2 instance metadata (via the assigned EC2 instance profile and IAM role) were ultimately under the control of the STS service? As in, only it can set the credential validity time.

How can kube2iam also set a role session TTL? Or does this option mean something different from the credential validity time?
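For illustration, the duration requested when assuming a role is a parameter of the sts:AssumeRole call itself (bounded by the role's maximum session duration), e.g. with the AWS CLI; the role ARN and session name below are placeholders:

aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/example-pod-role \
  --role-session-name example-session \
  --duration-seconds 900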

flags – Success with cookie, fails with JWT: RuntimeException: Failed to start the session because headers have already been sent

My Controller is working with cookie auth but failing with JWT. This Controller is supposed to flag an entity for the logged-in user.

If I am using cookie auth, there are no errors and everything works as expected.

But when I try to use JWT, although the entity does get flagged correctly, I get the following error in the Drupal logs:

RuntimeException: Failed to start the session because headers have
already been sent by "/app/vendor/symfony/http-foundation/Response.php"
at line 377. in
Symfony\Component\HttpFoundation\Session\Storage\NativeSessionStorage->start()
(line 150 of
/app/vendor/symfony/http-foundation/Session/Storage/NativeSessionStorage.php)

How do I fix this error?

Here’s how I’m using JWT auth in Postman:

POST http://example.com/api/group_add?_format=json

Headers:

  • Accept: application/vnd.api+json
  • Content-Type: application/vnd.api+json
  • Cache: no-cache
  • Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2MTgyMDk4MjAsImV4cCI6MTYyMzM5MzgyMCwiZHJ1cGFsIjp7InVpZCI6IjI3In19.5uDJMtokLXD6K63H5Ikb-F870EYFMrgE4mItTuTT3bI

Request body:

{
    "entity_id": "14"
}
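For reference, the same request expressed as a curl command (host, headers and body exactly as above; the token is abbreviated here):

# $JWT is the bearer token shown in the Authorization header above (abbreviated).
JWT='eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9...'
curl -X POST 'http://example.com/api/group_add?_format=json' \
  -H 'Accept: application/vnd.api+json' \
  -H 'Content-Type: application/vnd.api+json' \
  -H "Authorization: Bearer $JWT" \
  --data '{"entity_id": "14"}'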

As for the Controller, here’s MYMODULE.routing.yml:

MYMODULE.api_flagging.http:
  path: '/api/group_add'
  defaults:
    _controller: '\Drupal\MYMODULE\Controller\ApiFlagging::flag'
  methods: [POST]
  requirements:
    _permission: 'view own commerce_order'
    _format: 'json'
  options:
    no_cache: 'TRUE'

Here’s ApiFlagging.php:

<?php

namespace Drupal\MYMODULE\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\flag\FlagServiceInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpKernel\Exception\BadRequestHttpException;
use Symfony\Component\Serializer\Encoder\JsonEncoder;
use Symfony\Component\Serializer\Serializer;

/**
 * Class ApiFlagging.
 *
 * Https://www.drupal.org/project/flag/issues/3091824#comment-13336379
 */
class ApiFlagging extends ControllerBase {

  const FLAG_ID = 'ABC';

  /**
   * The flag service.
   *
   * @var \Drupal\flag\FlagServiceInterface
   */
  protected $flagService;

  /**
   * The serializer.
   *
   * @var \Symfony\Component\Serializer\Serializer
   */
  protected $serializer;

  /**
   * The available serialization formats.
   *
   * @var array
   */
  protected $serializerFormats = [];

  /**
   * Constructs a new ApiFlagging object.
   */
  public function __construct(Serializer $serializer, array $serializer_formats, FlagServiceInterface $flag) {
    $this->serializer = $serializer;
    $this->serializerFormats = $serializer_formats;
    $this->flagService = $flag;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    if ($container->hasParameter('serializer.formats') && $container->has('serializer')) {
      $serializer = $container->get('serializer');
      $formats = $container->getParameter('serializer.formats');
    }
    else {
      $formats = ['json'];
      $encoders = [new JsonEncoder()];
      $serializer = new Serializer([], $encoders);
    }

    return new static(
      $serializer,
      $formats,
      $container->get('flag')
    );
  }

  /**
   * Flagging.
   */
  public function flag(Request $request) {
    $format = $this->getRequestFormat($request);

    $content = $request->getContent();
    $flagData = $this->serializer->decode($content, $format);
    $flag = $this->flagService->getFlagById(self::FLAG_ID);
    $flaggableEntityTypeId = $flag->getFlaggableEntityTypeId();

    $my_goals = NULL;
    if (array_key_exists('goals', $flagData)) {
      $my_goals = $flagData['goals'];
    }

    $entity = \Drupal::entityTypeManager()
      ->getStorage($flaggableEntityTypeId)
      ->load($flagData['entity_id']);

    if ($my_goals === NULL) {
      return new JsonResponse([
        'error_message' => 'Goals not set.',
      ], 400);
    }

    try {
      /** @var \Drupal\flag\Entity\Flagging $flagging */
      $flag->set('field_goals', $my_goals);
      $flagging = $this->flagService->flag($flag, $entity);
    }
    catch (\LogicException $e) {
      $message = $e->getMessage();
      kint('error', $e);
      return new JsonResponse([
        'error_message' => $message,
      ], 400);
    }

    return new JsonResponse([
      'message' => 'flag success',
      'flagging_uuid' => $flagging->uuid(),
      'flagging_id' => $flagging->id(),
      'flag_id' => $flagging->getFlagId(),
    ]);
  }

  /**
   * Gets the format of the current request.
   *
   * @param \Symfony\Component\HttpFoundation\Request $request
   *   The current request.
   *
   * @return string
   *   The format of the request.
   */
  protected function getRequestFormat(Request $request) {
    $format = $request->getRequestFormat();
    if (!in_array($format, $this->serializerFormats)) {
      throw new BadRequestHttpException("Unrecognized format: $format.");
    }
    return $format;
  }

}

asp.net core – Session Scoped Dependency Injection vs Caching in Data Access Layer

The issue at hand is that I don’t want to repeatedly hit the DB to look up user information for the logged in user over the course of the many requests made within a single session.

My first inclination is to use Session Scoped Dependency Injection; however, this isn't possible with the out-of-the-box DI features provided by Microsoft. I could install Autofac, and that seems like a perfectly reasonable solution, but I seem to recall reading that Microsoft deliberately left this capability out because they considered it bad practice... though I can't remember their reasons at the moment.

Then it occurred to me that the way people at Microsoft would probably handle this is by relying on Entity Framework to cache the results. However, I'm not using EF, and the data access solution I am using does not provide caching, though I could easily add some.

So I’m wondering if I should 1) Install Autofac or 2) Add caching features to my data access layer.

Are there any serious reasons I should avoid option 1? (Not avoiding Autofac in general, obviously, but installing it specifically to get Session Scoped DI.) That's what I will probably end up doing unless I can find a substantial reason not to.

tunneling – SSH reverse tunnels: can the intermediate server eavesdrop on an SSH session?

Suppose there are three computers: (1) my laptop, (2) a server that has a public static IP address, and (3) a Raspberry Pi behind a NAT. I connect from (1) to (3) via (2) as explained below.

On the server (2), I add GatewayPorts yes to /etc/ssh/sshd_config and restart the SSH daemon: sudo systemctl reload sshd.service.

On the Raspberry Pi, I create a reverse SSH tunnel to the server:

rpi$ ssh -R 2222:localhost:22 username-on-server@server-ip-address

On my laptop, I am now able to connect to the Raspberry Pi using:

laptop$ ssh -p 2222 username-on-pi@server-ip-address

The question is: is the server able to see the data sent between my laptop and the Raspberry Pi? Can the server eavesdrop on the SSH session between my laptop and the Raspberry Pi?
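For what it's worth, a sketch of how I could sanity-check which host actually terminates the SSH session, by comparing host key fingerprints (the key file name may differ depending on the key type):

# On the Raspberry Pi: print the fingerprint of its SSH host key.
rpi$ ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub

# On the laptop: the host key fingerprint offered when connecting to
# port 2222 on the server should match the Pi's fingerprint above,
# because that TCP connection is forwarded to the Pi's own sshd.
laptop$ ssh -p 2222 username-on-pi@server-ip-address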

How can I remove all cookies except session cookies from nginx responses?

I'm serving several WordPress sites via nginx & PHP-FPM. Sometimes plugins set cookies that are unwanted and for which there is no consent. For those, and for privacy in general, I want to suppress all cookies except those needed to support admin logins, i.e. session cookies. I don't know the names, paths or domains of these cookies ahead of time. Essentially, if it's a Set-Cookie header containing Expires, it needs to die.

I’ve seen alternatives where configs set new cookies that have the same names but immediate expiry times, but I don’t want these cookies to ever get as far as the client.

I have looked at the stock nginx config options and that doesn’t seem to be possible – though it’s very easy to set more! The nginx headers_more extension has slightly more power in its more_clear_headers directive, but it won’t unset based on regular expressions, only simple wildcards; I can’t simply search for Expires because that occurs in other headers that are needed.

So I’m wondering if I need to dive into Lua scripting to get nginx to do this, which I have no idea how to do!

Any better ideas how to do this?
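For illustration, an untested sketch of what such a Lua filter might look like, assuming nginx is built with the lua-nginx-module (OpenResty) so that header_filter_by_lua_block is available:

# Untested sketch, assuming the lua-nginx-module / OpenResty.
header_filter_by_lua_block {
    local cookies = ngx.header["Set-Cookie"]
    if cookies then
        -- ngx.header returns a string for one cookie, a table for several
        if type(cookies) ~= "table" then
            cookies = { cookies }
        end
        local kept = {}
        for _, c in ipairs(cookies) do
            -- keep only cookies without an Expires attribute (session cookies)
            if not string.find(string.lower(c), "expires=", 1, true) then
                table.insert(kept, c)
            end
        end
        ngx.header["Set-Cookie"] = #kept > 0 and kept or nil
    end
}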

dnd 5e – How to make monsters level appropriate during a session?

Kobold Fight Club is an excellent tool for scaling encounters on the fly, and I use it constantly. The main trick in your arsenal is to add and remove enemies, rather than changing the stat blocks.

For example, if I set KFC to account for 4 level 5 PCs, it tells me that 5 gargoyles are an appropriate challenge for such a party, just barely a Deadly encounter. So let’s say your module has such an encounter and your PCs are actually level 6. According to KFC, you’d need to add 2 more gargoyles to scale this encounter appropriately. If they’re level 4, you’d need to scale back to 3 gargoyles.

Another thing you can do is bump the monsters up or down a tier. A lot of the monsters in the game have weaker or stronger counterparts. For example, a Gargoyle is an elemental. I can use KFC to quickly find all the elementals, and it looks like an Earth Elemental is CR 5. So if the PCs are level 7, for example, I might swap out two of the Gargoyles for Earth Elementals. If the PCs are much lower in level, I can try swapping in Mud Mephits instead. The tool quickly tells me how many enemies I'll need to challenge the party I have.

If you're finding that your players are having a really easy time with the enemies you throw at them, try just giving the enemies more hit points. For example, a Gargoyle normally has 52 hit points, from hit dice 7d8+21. Just giving them 77 hit points instead (the maximum possible roll on 7d8+21) will let the gargoyles last another round or so. The beautiful thing is that your players don't know how many hit points your monsters have, so you can adjust this as late as you want in response to the resources the PCs actually have at their disposal. This obviously works the other way around, too: to make an encounter easier, just take away some of the monsters' hit points.

linux – Applying systemd control group resource limits automatically to specific user applications in a gnome-shell session

Having seen that GNOME now launches apps under systemd scopes I’ve been looking at a way to get systemd to apply some cgroup resource and memory limits to my browser.

I want to apply a MemoryMax and CPUShare to all app-gnome-firefox-*.scope instances per systemd.resource-control.

But GNOME isn’t launching firefox with the instantiated unit format app-gnome-firefox-@.scope so I don’t know how to make a systemd unit file that will apply automatically to all app-gnome-firefox-*.scope instances.

I can manually apply the resource limits to an instance with systemctl set-property --user app-gnome-firefox-92450.scope (for example) once the unit starts, but that’s a pain.

Is there any way to inject properties for transient scopes with pattern matching for names?

This isn’t really gnome-shell specific; it applies just as well to a user terminal session that invokes a command with systemd-run --user --scope.

Details

Firefox is definitely launched under a systemd scope, and it gets its own cgroup:

$ systemctl --user status app-gnome-firefox-92450.scope
● app-gnome-firefox-92450.scope - Application launched by gnome-shell
     Loaded: loaded (/run/user/1000/systemd/transient/app-gnome-firefox-92450.scope; transient)
  Transient: yes
     Active: active (running) since Wed 2021-03-31 09:44:30 AWST; 32min ago
      Tasks: 567 (limit: 38071)
     Memory: 2.1G
        CPU: 5min 39.138s
     CGroup: /user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-92450.scope
             ├─92450 /usr/lib64/firefox/firefox
             ....
  ....

Verified by

$ systemd-cgls --user-unit app-gnome-firefox-92450.scope
Unit app-gnome-firefox-92450.scope (/user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-92450.scope):
├─92450 /usr/lib64/firefox/firefox
...

and

$ ls -d /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-*
/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-92450.scope

I can apply a MemoryMax (cgroup v2 constraint memory.max) to an already-running instance with systemctl set-property and it takes effect:

$ systemctl set-property --user app-gnome-firefox-98883.scope MemoryMax=5G
$ systemctl show --user app-gnome-firefox-98883.scope |grep ^MemoryMax
MemoryMax=5368709120
$ cat /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-*/memory.max
5368709120

It definitely takes effect – setting a low MemoryMax like 100M causes the firefox scope to OOM, as seen in journalctl --user -u app-gnome-firefox-98883.scope.

The trouble is that I can't work out how to apply systemd.resource-control rules automatically to new instances of the app.

I’ve tried creating a .config/systemd/user/app-gnome-firefox-@.scope containing

[Scope]
MemoryMax=5G

but it appears to have no effect.

systemd-analyze verify chokes on it rather unhelpfully:

$ systemd-analyze  verify --user .config/systemd/user/app-gnome-firefox-@.scope 
Failed to load unit file /home/craig/.config/systemd/user/app-gnome-firefox-@i.scope: Invalid argument

If I use systemctl set-property --user app-gnome-firefox-92450.scope on a running instance and systemctl --user show app-gnome-firefox-92450.scope I see the drop-in files at:

FragmentPath=/run/user/1000/systemd/transient/app-gnome-firefox-98883.scope
DropInPaths=/run/user/1000/systemd/transient/app-gnome-firefox-98883.scope.d/50-MemoryMax.conf

It has Names containing the pid, so that can’t be matched easily:

Id=app-gnome-firefox-98883.scope
Names=app-gnome-firefox-98883.scope

and I’m kind of stumped. Advice would be greatly appreciated, hopefully not “gnome-shell is doing it wrong, patch it” advice. Some draft systemd docs suggest it’s using one of the accepted patterns.

Workaround

The only workaround I see so far is to launch the firefox instance with systemd-run myself:

systemd-run --user --scope -u firefox.scope -p 'MemoryMax=5G' -p 'CPUQuota=80%' /usr/lib64/firefox/firefox

and let that be the control process. But it looks like this isolates the firefox control channel in some manner that prevents firefox processes launched by other apps or the desktop session from then talking to the cgroup-scoped firefox, resulting in

Firefox is already running, but is not responding. To use Firefox, you must first close the existing Firefox process, restart your device, or use a different profile.

Edit: firefox remoting when launched manually via systemd-run is fixed by setting MOZ_DBUS_REMOTE in the environment both for my user session and as a -E MOZ_DBUS_REMOTE=1 option to systemd-run. It’s probably because I’m using Wayland.
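Putting the edit together, the full workaround command looks something like this (limits and paths as above; MOZ_DBUS_REMOTE also has to be set in the desktop session environment, per the edit):

# Workaround: launch firefox in its own transient scope with resource limits,
# passing MOZ_DBUS_REMOTE so other launchers can still talk to this instance.
systemd-run --user --scope -u firefox.scope \
  -E MOZ_DBUS_REMOTE=1 \
  -p 'MemoryMax=5G' -p 'CPUQuota=80%' \
  /usr/lib64/firefox/firefox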

Still a clumsy workaround – it should surely be possible to apply resource control rules to slices via .config/systemd/user?

Will a VPN help to restrict a Meterpreter session?

I'm the victim of a Meterpreter attack, and I'm now using a VPN on all my devices. My question is: will that help me restrict the Meterpreter attack?

If not, what is the way to get rid of the Meterpreter attack?