applescript – Can I handle the "open" Apple event inside a bash shell script using the osascript command?

I use a bash script inside an application package, which starts `java -jar myJarFile.jar` with additional JVM arguments (this works fine so far). My goal is to also pass a filename as an argument to the application every time the user opens a file via "Open With …".

I tried implementing OpenFilesHandler in the Java application and copying a compiled AppleScript (.scpt) file into the package, without success.

The last thing I tried, just to rule out all the possibilities, was to call osascript inside the bash script:


#test: set command line args
MY_TITLE="Launching myJavaJarApp"
ARGS_MSG="command line args: "

osascript <<-EndOfScript
	set arguments to ""
	on open theFiles
		repeat with anItem in theFiles
			set arguments to arguments & space & (quoted form of POSIX path of anItem)
		end repeat
	end open
	display dialog "$ARGS_MSG" & arguments with title "$MY_TITLE"
EndOfScript

It didn't work: the dialog box only shows the $ARGS_MSG text with no arguments when I open a file via "Open With …".

It seems to me that setting the bash script as the CFBundleExecutable "consumes" all Apple events.

Or is there some way to still receive the open event?

command line: cannot open GNOME Terminal using shortcut

I can't open GNOME Terminal in my Ubuntu 18.04 LTS installation, neither through the menu (after clicking the 9 dots in the lower left corner) nor with the hotkey Ctrl+Alt+T.

Also, I looked in the /bin directory: there is no gnome-terminal file in there.

Also, I can't open Software Updater from the menu (after clicking the 9 dots in the lower left corner), and on the taskbar (top of the screen) there is a red dot with a white line (a "no entry" symbol).

Could someone please help? I have seen another similar post about not being able to open gnome-terminal, but I couldn't find what I was looking for. I use GNOME Terminal very frequently.

bash – Linux GNOME: Start multiple terminals and run a command on each

How can I implement the following scenario in Ubuntu Linux?

I want to go to my console, run "./", and then:

1] Terminal 1 appears and starts /home/foobar/
2] Terminal 2 appears and starts /home/foobar/
3] Terminal 3 appears and starts /home/foobar/

I already know that the command `gnome-terminal & disown` starts a new terminal.
However, until now, I don't know how to execute a command in that terminal.

I'll accept any answer that gives me the full implementation
or a list of commands that I can use.

Thank you!

command line – GNOME Terminal still opens inside folders after `update-alternatives --config x-terminal-emulator` was changed

Hi guys, I like working with the Tilix terminal because of the multiple panels.
I have used

sudo update-alternatives --config x-terminal-emulator

to change the default terminal, and it works fine from the desktop, but when I open a terminal from inside a folder it still opens GNOME Terminal. Any idea why?

python 3.x – Run a command using `asyncio.create_subprocess_shell` and yield lines of stdout and stderr, and finally the return code

My task is to run a subprocess using asyncio.create_subprocess_shell and yield the lines it produces. I'm using asyncio to avoid creating threads just to pump the streams.

  • It is necessary to separate the stdout lines from the stderr lines.
  • Both outputs must be read simultaneously; it is not possible to read all of stdout or stderr first, because the other pipe's buffer may fill up and the subprocess would block.
  • This function is used in a gRPC service, which is why the produced lines are wrapped in instances of cli_pb2.ExecuteReply(...).
  • .readline() returns b"" at EOF, so when I see that, I stop reading that stream.

I want to ask if my approach to reading stdout and stderr simultaneously can be improved. I was looking for something similar to UNIX select() and I found asyncio.wait({...}, return_when=asyncio.FIRST_COMPLETED). Then I had to find out which stream (stdout or stderr) each result corresponds to. I hacked that by looking at _coro.

import asyncio

import cli_pb2
import cli_pb2_grpc

async def run(cmd):
    proc = await asyncio.create_subprocess_shell(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )

    stderr_readline = proc.stderr.readline()
    stdout_readline = proc.stdout.readline()
    pending = {stderr_readline, stdout_readline}

    while pending:
        done, pending = await asyncio.wait(pending, return_when=asyncio.FIRST_COMPLETED)
        for f in done:
            line = (await f).decode()
            if f._coro is stderr_readline:  # bad, but it works and can't think of better approach
                if line:
                    yield cli_pb2.ExecuteReply(stderr=line)
                    stderr_readline = proc.stderr.readline()
                    pending.add(stderr_readline)  # put it back into the `pending` set
            if f._coro is stdout_readline:
                if line:
                    yield cli_pb2.ExecuteReply(stdout=line)
                    stdout_readline = proc.stdout.readline()
                    pending.add(stdout_readline)  # put it back into the `pending` set
    await proc.wait()
    status = proc.returncode
    yield cli_pb2.ExecuteReply(status=status)

Example of use

async def printlines():
    async for reply in run("echo a; sleep 5; echo b; echo c"):
        print(reply)

outputs (with a 5-second pause after the first line):

stdout: "a\n"
stdout: "b\n"
stdout: "c\n"
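One way to avoid the private `_coro` attribute is to wrap each `readline()` call in a task with `asyncio.ensure_future` and keep your own mapping from task to stream name. This is only a sketch, shown without the gRPC wrapping so that it is self-contained; the function and variable names are made up:

```python
import asyncio

async def stream_lines(cmd):
    """Yield ("stdout"|"stderr", line) pairs, then ("status", returncode)."""
    proc = await asyncio.create_subprocess_shell(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    streams = {"stdout": proc.stdout, "stderr": proc.stderr}
    # Our own task -> stream-name mapping replaces the `_coro` hack.
    pending = {
        asyncio.ensure_future(reader.readline()): name
        for name, reader in streams.items()
    }
    while pending:
        done, _ = await asyncio.wait(pending, return_when=asyncio.FIRST_COMPLETED)
        for task in done:
            name = pending.pop(task)
            line = task.result().decode()
            if line:  # b"" means EOF on that stream
                yield name, line
                # schedule the next readline on the same stream
                pending[asyncio.ensure_future(streams[name].readline())] = name
    yield "status", await proc.wait()

async def demo():
    results = []
    async for item in stream_lines("echo a; echo b"):
        results.append(item)
    return results

results = asyncio.run(demo())
```

Since the `pending` dict is keyed by tasks, passing it to `asyncio.wait` iterates its keys, and popping a finished task immediately tells you which stream it belongs to. On Python 3.8+, `asyncio.create_task(..., name=...)` is another way to label the tasks.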

The definition of protobuf is

syntax = "proto3";

option java_multiple_files = true;
option java_package = "djtests.cli";
option java_outer_classname = "CliProto";

package cli;

service Executor {
    rpc Execute (ExecuteRequest) returns (stream ExecuteReply) {}
}

message ExecuteRequest {
    string cmd = 1;
}

message ExecuteReply {
    oneof reply_fields {
        string stdout = 1;
        string stderr = 2;
        int32 status = 3;  // sent in the last message
    }
}

serial port: Linux screen command does not accept keyboard input

I am trying to connect to a JTAG device using the screen command.

I write the command sudo screen /dev/ttyUSB1 115200,cs8

I can see all the output from the board, but I can't type anything into it. Previously this worked fine on the same board. Have I messed up some screen settings that are blocking my input? I don't think it's just a matter of the input not being echoed, because when I press Enter, the commands on the device are not sent or executed.

18.04 – I moved a shared library, now I can't execute any command

I ran sudo mv /lib/x86_64-linux-gnu/ ~, which in retrospect was not the best idea in the world.

Now I can't do anything. Any command I run fails with:
error while loading shared libraries: cannot open shared object file: No such file or directory.

I cannot run apt; I can't even do an ls. I can't move the file back with [sudo] mv ~/ /lib/x86_64-linux-gnu/. I cannot run ldconfig. Any ideas on how to undo this damage?

A possibly relevant fact is that /lib/x86_64-linux-gnu/ also contained the file, which perhaps was linked with it somehow?

I don't think it matters, but this is Ubuntu 18.04 running on WSL.

c# – Design pattern: how to inject dependencies into a command pattern

I'm fairly new to programming and have only limited knowledge of design patterns, so I hope you can help me with the following problem:

I have an application that operates on a group of different services. One functionality of the application is to provide an interface to the user to call all available service methods. Therefore, I want to use the Command pattern because it allows me to add new commands simply by adding new classes and not changing the existing code. The parameters for each service command are passed to the constructor.


public interface ICommand {
    void Execute();
}

public abstract class Command<T> : ICommand {
    public T Service { get; set; }

    public abstract void Execute();  // implementations use Service
}

public class Command1 : Command<S1> {
    T1 param1;

    public Command1(T1 param1, ...) { /* set parameters */ }

    public override void Execute() { /* call first service1 method */ }
}

public class Command2 : Command<S2> {
    T2 param1;

    public override void Execute() { /* call first service2 method */ }
}


The advantage is that the user can create instances of a group of commands without knowing the internals of the application, and execute them later once the services have been configured. The problem is that I don't know how to inject the services elegantly.
The application is primarily responsible for starting and stopping the services, and it keeps an instance of each service in one central place.


public class Application {
    S1 Service1;
    S2 Service2;

    public void StartService(/* params */) { /* ... */ }
    public void StopService(/* params */) { /* ... */ }
}

So my question is: how do I get the correct service inside a command?
I thought about using some kind of dependency injection, a service locator, or a factory pattern, but I have never used these patterns and I'm not sure which is the best solution in this case or how to implement it correctly.
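One common shape for this, sketched in Python for brevity (the idea maps directly onto C# constructor injection; all class names here are made up for illustration): the application owns the services and exposes factory methods that construct commands with the right service already injected, so commands never have to locate services themselves.

```python
class PrintService:
    """Stand-in for one of the application's services."""
    def greet(self, name):
        return f"hello {name}"

class Command:
    """Base command: subclasses implement execute()."""
    def execute(self):
        raise NotImplementedError

class GreetCommand(Command):
    # The service is injected via the constructor,
    # alongside the user-supplied parameters.
    def __init__(self, service, name):
        self.service = service
        self.name = name

    def execute(self):
        return self.service.greet(self.name)

class Application:
    """Owns the services and acts as the factory that injects them."""
    def __init__(self):
        self.print_service = PrintService()

    def make_greet_command(self, name):
        return GreetCommand(self.print_service, name)

app = Application()
cmd = app.make_greet_command("world")
print(cmd.execute())  # -> hello world
```

In C# this corresponds to passing the service as an extra constructor parameter (or letting a DI container resolve it when the command is created); the command classes stay ignorant of where the service instance lives.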

Memory problem when sending a large number of emails from the console command

I have a console command that checks orders and sends emails to customers.

protected function execute(InputInterface $input, OutputInterface $output){
    $orders = $this->getOrders();
    foreach ($orders as $order) {
        $data = ... // prepare some data
        $this->sendMail($order->getCustomerEmail(), $data);
    }
}

private function sendMail($email, $data){
    $postObject = new \Magento\Framework\DataObject();

    if ($this->options("test-email")) {
        $email = $this->options("test-email");
    }

    $maskedEmail = substr($email, 0, 1).'***'.substr($email, strpos($email, "@"));
    $this->output->writeln("\tSending mail to: {$maskedEmail}");

    // send mail to recipients
    $storeScope = \Magento\Store\Model\ScopeInterface::SCOPE_STORE;
    $transport = $this->transportBuilder
        ->setTemplateIdentifier(
            $this->scopeConfig->getValue(self::EMAIL_TEMPLATE, $storeScope)
        )
        ->setTemplateOptions([
            'area' => \Magento\Framework\App\Area::AREA_FRONTEND,
            'store' => $this->storeManager->getStore()->getId(),
        ])
        ->setTemplateVars(['data' => $postObject])
        ->setFrom(
            $this->scopeConfig->getValue(self::EMAIL_SENDER, $storeScope)
        )
        ->addTo($email)
        ->getTransport();

    $transport->sendMessage();
}

This works fine, but when I tried it with ~5000 orders, at around email ~2700 it gives me this error:

PHP Fatal error: Allowed memory size of xxxxxxx bytes exhausted (tried to allocate 344064 bytes) in /app/xxxxxx/vendor/magento/framework/View/TemplateEngine/Php.php on line 66
{..rendered email html content..}
Check for more info on how to handle out of memory errors.

I assume the rendered email HTML remains in memory.
I tried unset($transport), but memory usage still increases.

What could be the problem?

Thanks in advance.

google cloud compute – Ubuntu growpart command says "partition could only be grown by -33 [fudge=2048]"

I am trying to add an additional disk to my Compute Engine instance by following the instructions here, but I am stuck on growing the partition.

sudo growpart /dev/sda 1
NOCHANGE: partition 1 could only be grown by -33 [fudge=2048]

Any help would be highly appreciated!