amazon web services – Unable to access AWS endpoint despite opening in browser

I want to perform an operation with rclone against an AWS endpoint. My current goal is to use rclone filtering to select certain objects to be copied to AWS S3. However, when I run the commands below, I get a 504 error in the terminal. I ran the following from bash:

> rclone ls s3ceph:braingeneers --include "NOBACKUP" | awk -F"NOBACKUP" '{print $1}' | awk '{print $2}' | tee rclone_excludes.txt && \
> rclone ls s3ceph:braingeneers --exclude-from rclone_excludes.txt | tee output.txt

The full 504 serialization error is:

2021/09/07 20:42:04 Failed to ls: SerializationError: failed to unmarshal error message
        status code: 504, request id: , host id:
caused by: UnmarshalError: failed to unmarshal error message
        00000000  3c 68 74 6d 6c 3e 3c 62  6f 64 79 3e 3c 68 31 3e  |<html><body><h1>|
00000010  35 30 34 20 47 61 74 65  77 61 79 20 54 69 6d 65  |504 Gateway Time|
00000020  2d 6f 75 74 3c 2f 68 31  3e 0a 54 68 65 20 73 65  |-out</h1>.The se|
00000030  72 76 65 72 20 64 69 64  6e 27 74 20 72 65 73 70  |rver didn't resp|
00000040  6f 6e 64 20 69 6e 20 74  69 6d 65 2e 0a 3c 2f 62  |ond in time..</b|
00000050  6f 64 79 3e 3c 2f 68 74  6d 6c 3e 0a              |ody></html>.|

I have tried opening the AWS endpoint in my browser, and I don’t seem to be getting any errors. If anyone has any suggestions as to why this is happening, I would greatly appreciate it. Thanks!
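To help isolate whether the 504 comes from rclone or from the gateway in front of the object store, one option is to run the same listing directly with boto3. A minimal sketch, where the endpoint URL and credentials are placeholders since the s3ceph remote's configuration isn't shown:

import boto3

# Placeholders: substitute the endpoint and credentials from the
# "s3ceph" remote in rclone.conf. If this listing also times out with
# a 504, the problem sits in front of the object store, not in rclone.
s3 = boto3.client(
    's3',
    endpoint_url='https://ceph-gateway.example.com',  # placeholder
    aws_access_key_id='ACCESS_KEY',                   # placeholder
    aws_secret_access_key='SECRET_KEY'                # placeholder
)

resp = s3.list_objects_v2(Bucket='braingeneers', MaxKeys=10)
for obj in resp.get('Contents', []):
    print(obj['Size'], obj['Key'])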

Is it possible to migrate an AWS EC2 instance to AWS Lightsail?

I need to move a couple of EC2 instances to Lightsail. There are ways of exporting Lightsail->EC2 using a snapshot image, but I cannot find any info on how to do it the other way around (EC2->Lightsail).

Is the migration EC2->Lightsail possible?

nginx – Improvements, optimization, security, AWS, how to really build a web server?

Soon we will have some interns at the company, and we need to put them on genuinely high-level activities. The team wants them to work on the web servers: optimizing them, securing them, making them faster, and so on. These servers run several WordPress sites on nginx, with php-fpm caching, query caching, etc., all on AWS. We need more sysadmin activities for them.
Any ideas?

AWS DataPipeline EDPSession is not authorized to perform: elasticmapreduce:ListClusters

I'm getting this error in Data Pipeline:

Unable to create resource for @EmrClusterForLoad_2021-09-07T20:24:34 due to: User: arn:aws:sts::########:assumed-role/sbtrry-datapipeline-role/EDPSession is not authorized to perform: elasticmapreduce:ListClusters on resource: * (Service: AmazonElasticMapReduce; Status Code: 400; Error Code: AccessDeniedException; Request ID: 6aaf6e88-5df3-416f-8714-29d4c6065e77; Proxy: null)

I tried adding the policies cited in the documentation (https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html), but I still get the same error.

I added "elasticmapreduce:*" to both policies.

Does anyone have any idea what it could be?

My policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:GetInstanceProfile",
                "iam:GetRole",
                "iam:GetRolePolicy",
                "iam:ListAttachedRolePolicies",
                "iam:ListRolePolicies",
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::########:role/sbtrry-datapipeline-role",
                "arn:aws:iam::########:role/sbtrry-datapipeline-role-ec2"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:CancelSpotInstanceRequests",
                "ec2:CreateNetworkInterface",
                "ec2:CreateSecurityGroup",
                "ec2:CreateTags",
                "ec2:DeleteNetworkInterface",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteTags",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeDhcpOptions",
                "ec2:DescribeImages",
                "ec2:DescribeInstanceStatus",
                "ec2:DescribeInstances",
                "ec2:DescribeKeyPairs",
                "ec2:DescribeLaunchTemplates",
                "ec2:DescribeNetworkAcls",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribePrefixLists",
                "ec2:DescribeRouteTables",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSpotInstanceRequests",
                "ec2:DescribeSpotPriceHistory",
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
                "ec2:DescribeVpcAttribute",
                "ec2:DescribeVpcEndpoints",
                "ec2:DescribeVpcEndpointServices",
                "ec2:DescribeVpcs",
                "ec2:DetachNetworkInterface",
                "ec2:ModifyImageAttribute",
                "ec2:ModifyInstanceAttribute",
                "ec2:RequestSpotInstances",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:RunInstances",
                "ec2:TerminateInstances",
                "ec2:DescribeVolumeStatus",
                "ec2:DescribeVolumes",
                "elasticmapreduce:TerminateJobFlows",
                "elasticmapreduce:ListSteps",
                "elasticmapreduce:ListClusters",
                "elasticmapreduce:RunJobFlow",
                "elasticmapreduce:DescribeCluster",
                "elasticmapreduce:AddTags",
                "elasticmapreduce:RemoveTags",
                "elasticmapreduce:ListInstanceGroups",
                "elasticmapreduce:ModifyInstanceGroups",
                "elasticmapreduce:*",
                "elasticmapreduce:DescribeStep",
                "elasticmapreduce:AddJobFlowSteps",
                "elasticmapreduce:ListInstances",
                "iam:ListInstanceProfiles",
                "redshift:DescribeClusters"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "sns:GetTopicAttributes",
                "sns:Publish"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:Scan",
                "dynamodb:DescribeTable"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBInstances"
            ],
            "Resource": "*"
        }
    ]
}
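One way to check what the assumed role is actually allowed to do, independently of a Data Pipeline run, is IAM's policy simulator. A minimal boto3 sketch, where the role ARN mirrors the masked one in the error above (substitute the real account id):

import boto3

iam = boto3.client('iam')

# Ask IAM whether the role's attached policies allow the exact action
# that Data Pipeline is failing on. The ARN below is masked like the
# error message; fill in the real account id.
resp = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::########:role/sbtrry-datapipeline-role',
    ActionNames=['elasticmapreduce:ListClusters']
)
for result in resp['EvaluationResults']:
    print(result['EvalActionName'], '->', result['EvalDecision'])

If the simulator reports the action as allowed, the denial may be coming from something outside this policy, such as the role's trust relationship or an organization-level service control policy.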

I'm trying to test a JDBC connection from an AWS EC2 instance to a CentOS 7 server running Oracle 12.2.0.1, and the connection is hanging

At some point we are going to upgrade our 12.2.0.1 databases to 19c, so I am using the 19c client on the EC2 instance for the JDBC drivers. When I try connecting to a 19c database it works. I am using Java 1.8.0_291 and the ojdbc8.jar library from the 19c Oracle Client.

Is there a reason why ojdbc8.jar won't connect to a 12cR2 database? Is there something I can add to the listener on my 12cR2 database, or to my pom.xml, that would allow this to work? Obviously I can install a 12cR2 client and test that next, but I would like to understand why this is an issue first. I added tracing to Main.java and recompiled it with a stack trace; it takes over five minutes before the connection times out, even though I reduced the timeout in Main.java.

Thanks

aimtiaz11/oracle-jdbc-tester

17:42:15.668 (main) INFO Main - Logger Initialized
17:42:15.678 (main) INFO Main - arg 0 = <secret_username>
17:42:15.679 (main) INFO Main - arg 1 = <topsecretpassword>
17:42:15.679 (main) INFO Main - arg 2 = jdbc:oracle:thin:@//mydbserver:1521/SRVCNAME
17:42:15.971 (main) INFO Main - ****** Starting JDBC Connection test *******
17:42:15.971 (main) INFO Main - Open JDBC connection
17:58:11.578 (main) ERROR Main - Exception occurred connecting to database: IO Error: Connection timed out, Authentication lapse 0 ms.
17:58:11.580 (main) ERROR Main - Sql Exception
java.sql.SQLRecoverableException: IO Error: Connection timed out, Authentication lapse 0 ms.
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:874)
        at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:793)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:57)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:747)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:562)
        at java.sql.DriverManager.getConnection(DriverManager.java:664)
        at java.sql.DriverManager.getConnection(DriverManager.java:208)
        at Main.main(Main.java:38)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
        at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
        at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
        at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
Caused by: java.io.IOException: Connection timed out, Authentication lapse 0 ms.
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:870)
        ... 15 common frames omitted
Caused by: java.io.IOException: Connection timed out
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:378)
        at oracle.net.nt.TimeoutSocketChannel.read(TimeoutSocketChannel.java:174)
        at oracle.net.ns.NSProtocolNIO.doSocketRead(NSProtocolNIO.java:555)
        at oracle.net.ns.NIOPacket.readHeader(NIOPacket.java:258)
        at oracle.net.ns.NIOPacket.readPacketFromSocketChannel(NIOPacket.java:190)
        at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:132)
        at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:105)
        at oracle.net.ns.NIONSDataChannel.readDataFromSocketChannel(NIONSDataChannel.java:91)
        at oracle.jdbc.driver.T4CMAREngineNIO.prepareForUnmarshall(T4CMAREngineNIO.java:764)
        at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:429)
        at oracle.jdbc.driver.T4C8TTIdty.receive(T4C8TTIdty.java:736)
        at oracle.jdbc.driver.T4C8TTIdty.doRPC(T4C8TTIdty.java:647)
        at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1702)
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:588)
        ... 15 common frames omitted
17:58:11.581 (main) ERROR Main - exit 1
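Since the failure is a plain TCP read timeout during logon, a quick reachability check from the EC2 instance can help separate a network or firewall problem from a driver problem. A minimal Python sketch, with the host and port taken from the JDBC URL above:

import socket

# Host/port from the JDBC URL: jdbc:oracle:thin:@//mydbserver:1521/SRVCNAME
HOST, PORT = "mydbserver", 1521

try:
    # If this connects quickly, basic TCP reachability is fine and the
    # hang happens later, during the Oracle logon handshake.
    with socket.create_connection((HOST, PORT), timeout=5):
        print("TCP connect to %s:%d succeeded" % (HOST, PORT))
except OSError as exc:
    print("TCP connect failed:", exc)

Given that the stack trace dies inside T4CConnection.logon rather than on the initial connect, one possibility worth checking is an MTU or firewall issue that drops the larger packets exchanged mid-handshake.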

database – A Hit Counter For AWS Python Lambda powered with DynamoDB

I'm learning how to use DynamoDB and various AWS services, so I decided to write a simple hit counter application to teach myself DynamoDB. The application deduplicates hits from the same client for the same page by storing a hash of the client's user agent and IP address. It also stores one entry of hits per day and accumulates the totals over time. I'm looking for feedback on how I structured the DynamoDB calls and on potential issues. I suspect I might be using a consistent read where I don't need one, and I'm also curious whether I used the right pattern to handle an upsert in DynamoDB.

The DynamoDB table has a hash key called "the_url" and a sort key called "as_of_when", which indicates roughly when the hits occurred.
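For reference, a table with that key schema could be created roughly like this (a minimal sketch; the on-demand billing mode is an assumption, since the provisioning isn't described):

import boto3

dynamo = boto3.client('dynamodb')

# Matches the key schema described above: "the_url" as the hash key and
# "as_of_when" as the sort key. The billing mode is assumed.
dynamo.create_table(
    TableName='hit_counts',
    AttributeDefinitions=[
        {'AttributeName': 'the_url', 'AttributeType': 'S'},
        {'AttributeName': 'as_of_when', 'AttributeType': 'S'}
    ],
    KeySchema=[
        {'AttributeName': 'the_url', 'KeyType': 'HASH'},
        {'AttributeName': 'as_of_when', 'KeyType': 'RANGE'}
    ],
    BillingMode='PAY_PER_REQUEST'
)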

"""
A simple hit counter stuff to learn the various apis of dynamodb.
"""

import datetime
import os
import hashlib
import base64

import boto3
from botocore.exceptions import ClientError
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all
from pybadges import badge

patch_all()

TABLE_NAME = os.getenv("TABLE_NAME", "hit_counts")

dynamo = boto3.client('dynamodb')


def get_today() -> str:
    """
    Gets a formatted date string
    """
    return datetime.datetime.now().strftime("%Y-%m-%d")

def get_full_timestamp() -> str:
    """
    Gets a complete date string
    """
    return datetime.datetime.now().strftime("%Y%m%d%H%M%S")

def get_previous_total(url:str):
    """
    Gets the most recent known total. Should be used before inserting a new instance
    """
    response = dynamo.query(
        TableName=TABLE_NAME,
        Select='ALL_ATTRIBUTES',
        Limit=1,
        KeyConditionExpression="the_url=:urlval",
        ScanIndexForward=False,
        ExpressionAttributeValues={
            ":urlval": {
                "S": url
            }
        }
    )
    if response['Count'] == 0:
        return 0
    return response['Items'][0]['accumulated_count']['N']

@xray_recorder.capture('insert_new_entry')
def insert_new_entry(url:str, as_of_when : str, user_hash : str, user_id : str):
    the_count = str(int(get_previous_total(url)) + 1)

    result = dynamo.put_item(
        TableName=TABLE_NAME,
        Item={
            'the_url': {
                'S': url
            },
            'as_of_when': {
                'S': as_of_when
            },
            'today_count': {
                'N': '1'
            },
            'accumulated_count': {
                'N': the_count
            },
            'user_id_hashes': {
                'SS': [user_hash]
            },
            'user_hashes': {
                'SS': [user_id]
            }
        },
        ReturnValues='ALL_OLD',
        ReturnItemCollectionMetrics='SIZE',
        ReturnConsumedCapacity='TOTAL',
        ConditionExpression='attribute_not_exists(the_url) and attribute_not_exists(as_of_when)'
    )
    print('insert_result', result)

    return result

@xray_recorder.capture('update_existing_entry')
def update_existing_entry(url:str, as_of_when : str, user_hash : str, user_id : str):
    result = dynamo.execute_statement(
        Statement=f"""
            UPDATE {TABLE_NAME}
            SET today_count = today_count + 1
            SET accumulated_count = accumulated_count + 1
            SET user_hashes = set_add(user_hashes, ?)
            SET user_id_hashes = set_add(user_id_hashes, ?)
            WHERE the_url = ? AND as_of_when = ? RETURNING ALL NEW *
        """,
        Parameters=[
            {
                "SS": [user_id]
            },
            {
                "SS": [user_hash]
            },
            {
                "S": url
            },
            {
                "S": as_of_when
            }
        ]
    )
    return result

@xray_recorder.capture('get_todays_entry')
def get_todays_entry(url:str, as_of_when : str):
    result = dynamo.get_item(
        TableName=TABLE_NAME,
        Key={
            'the_url': {
                'S': url
            },
            'as_of_when': {
                'S': as_of_when
            }
        },
        AttributesToGet=[
            'today_count',
            'accumulated_count',
            'user_hashes',
            'user_id_hashes'
        ],
        ConsistentRead=True
    )

    print('get_todays_entry', result)
    if 'Item' in result:
        return result['Item']
    return None


def increment_hit_count(url:str, as_of_when : str, user_hash : str, user_id : str):
    """
    increments a counter instance in the dynamo table
    """
    current_hits = get_todays_entry(url, as_of_when)

    if current_hits is None:
        # Insert new entry
        x = insert_new_entry(url, as_of_when, user_hash, user_id)
        current_hits = {
            'accumulated_count': {
                'N': "1"
            }
        }
    else:
        # Check for existence in existing object
        print(current_hits['user_id_hashes'])
        print(user_id)
        if user_hash not in current_hits['user_id_hashes']['SS']:
            result = update_existing_entry(url, as_of_when, user_hash, user_id)
            if 'Items' in result:
                current_hits = result['Items'][0]
            print(result)

    return current_hits['accumulated_count']['N']




def hash_api_gateway_event(event : dict):
    reqContext = event['requestContext']['http']
    reqString = ':'.join((reqContext['sourceIp'], reqContext['protocol'] + reqContext['userAgent']))
    
    m = hashlib.md5()
    m.update(reqString.encode('utf-8'))

    return base64.b64encode(m.digest()).decode('utf-8'), reqString


def handler(event : dict, context):
    """
    # Invoked via query string parameters from an image tag
    # Returns a SVG unless text parameter is set.
    """
    print(event)
    print("hello")

    user_hash, og_user_id = hash_api_gateway_event(event)
    print(user_hash, og_user_id)

    if 'queryStringParameters' in event:
        url = event['queryStringParameters']['url']
        result = increment_hit_count(url, get_today(), user_hash, og_user_id)

        print(result)


    body = badge(left_text="Total Views", right_text=result)

    return {
        "statusCode": 200,
        "isBase64Encoded": False,
        "headers": {
            "Content-Type": "image/svg+xml"
        },
        "body": body
    }
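On the upsert question: one pattern worth comparing against the insert-then-update flow above is a single conditional update_item, which creates the item on first write and deduplicates repeat hits in one request. A minimal sketch, reusing the dynamo client and TABLE_NAME from the code above (an alternative, not what the code currently does):

def record_hit_atomically(url: str, as_of_when: str, user_hash: str, user_id: str):
    """
    Alternative single-call upsert: ADD creates the counters and sets if
    they don't exist yet, and the condition rejects a duplicate hit from
    the same user hash (raising ConditionalCheckFailedException).
    """
    return dynamo.update_item(
        TableName=TABLE_NAME,
        Key={
            'the_url': {'S': url},
            'as_of_when': {'S': as_of_when}
        },
        UpdateExpression=(
            'ADD today_count :one, accumulated_count :one, '
            'user_id_hashes :hash_set, user_hashes :id_set'
        ),
        ConditionExpression=(
            'attribute_not_exists(user_id_hashes) '
            'OR NOT contains(user_id_hashes, :hash)'
        ),
        ExpressionAttributeValues={
            ':one': {'N': '1'},
            ':hash_set': {'SS': [user_hash]},
            ':id_set': {'SS': [user_id]},
            ':hash': {'S': user_hash}
        },
        ReturnValues='ALL_NEW'
    )

Note that this does not carry the previous day's total into a brand-new daily item the way get_previous_total does, so that query would still be needed on the first hit of each day.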

Connect to VPN within a Windows 10 AWS Workspace

I'm running a Windows 10 AWS WorkSpace and want to connect to a company VPN from it. I've turned off "Use default gateway on remote network". The Windows VPN interface indicates that I'm connected, but I'm not able to access any network resources such as file shares or web servers.

I'm unsure what to do next.

How many AWS services are there in 2021?

As of September 2021, how many cloud services does AWS offer?

amazon web services – AWS RDS: Invalid max storage size for engine name postgres and storage type gp2: 198

I'm getting this error while modifying a DB instance to configure storage autoscaling: "Invalid max storage size for engine name postgres and storage type gp2: 198".

This only started happening yesterday, which is odd. Below is the current storage configuration of the DB instance.

[screenshot: current storage configuration of the DB instance]
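If it helps to compare against the API's view of the instance, here is a minimal boto3 sketch that reads the current allocation and the autoscaling ceiling; the instance identifier is a placeholder:

import boto3

rds = boto3.client('rds')

# "my-postgres-instance" is a placeholder identifier. MaxAllocatedStorage
# is the storage-autoscaling ceiling that the error refers to; it must be
# larger than the current AllocatedStorage.
resp = rds.describe_db_instances(DBInstanceIdentifier='my-postgres-instance')
inst = resp['DBInstances'][0]
print('AllocatedStorage:   ', inst['AllocatedStorage'])
print('MaxAllocatedStorage:', inst.get('MaxAllocatedStorage'))
print('StorageType:        ', inst['StorageType'])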

amazon web services – How to get an AWS role’s friendly name from its ARN?

I am trying to attach a policy to an IAM role in Terraform. I only know the role's ARN, not its "friendly name", but the policy attachment resource requires a friendly name instead of an ARN.

How can I get the friendly name of an IAM role if I already have its ARN?

Here's what I have so far. It gives me "ValidationError: The specified value for roleName is invalid. It must contain only alphanumeric characters and/or the following: +=,.@_-". I believe this is because I am using the role's ARN instead of its friendly name.

resource "aws_iam_role_policy_attachment" "my-policy-attachment" {
  role       = "arn:aws:iam::my_user_account_id:role/my_role_name"
  policy_arn = aws_iam_policy.my_policy.arn
}

Thanks!
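For illustration, the friendly name of a role is the final "/"-separated segment of its ARN, so it can be recovered by splitting the string; a minimal sketch in Python, using the placeholder ARN from the snippet above:

# A role ARN has the form arn:aws:iam::<account-id>:role/<path...>/<name>,
# and the friendly name is the last "/"-separated segment.
role_arn = "arn:aws:iam::my_user_account_id:role/my_role_name"
role_name = role_arn.split("/")[-1]
print(role_name)  # -> my_role_name

In Terraform, the built-in split function can perform the same extraction.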