javascript – AWS DynamoDB Error [UnknownError]: Not Found (aws-sdk)

I’m trying to add items to a DynamoDB table as in the example below, but when I run it I get this error message:

Error Error [UnknownError]: Not Found
    at Request.extractError (C:\Users\caval\Documents\GitRepos\express-es6-rest-api\node_modules\aws-sdk\lib\protocol\json.js:51:27)
    at Request.callListeners (C:\Users\caval\Documents\GitRepos\express-es6-rest-api\node_modules\aws-sdk\lib\sequential_executor.js:106:20)
    at Request.emit (C:\Users\caval\Documents\GitRepos\express-es6-rest-api\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
    at Request.emit (C:\Users\caval\Documents\GitRepos\express-es6-rest-api\node_modules\aws-sdk\lib\request.js:688:14)
    at Request.transition (C:\Users\caval\Documents\GitRepos\express-es6-rest-api\node_modules\aws-sdk\lib\request.js:22:10)
    at AcceptorStateMachine.runTo (C:\Users\caval\Documents\GitRepos\express-es6-rest-api\node_modules\aws-sdk\lib\state_machine.js:14:12)
    at C:\Users\caval\Documents\GitRepos\express-es6-rest-api\node_modules\aws-sdk\lib\state_machine.js:26:10
    at Request.<anonymous> (C:\Users\caval\Documents\GitRepos\express-es6-rest-api\node_modules\aws-sdk\lib\request.js:38:9)
    at Request.<anonymous> (C:\Users\caval\Documents\GitRepos\express-es6-rest-api\node_modules\aws-sdk\lib\request.js:690:12)
    at Request.callListeners (C:\Users\caval\Documents\GitRepos\express-es6-rest-api\node_modules\aws-sdk\lib\sequential_executor.js:116:18) {
  message: 'Not Found',
  code: 'UnknownError',
  statusCode: 404,
  time: 2020-07-05T08:21:43.936Z,
  requestId: undefined,
  retryable: false,
  retryDelay: 38.04958058577693
}

I also tried getItem, deleteItem… and I get the same error. If I remove these calls I don’t get any error, so the error definitely comes from these methods.

    let ddb = new AWS.DynamoDB({
      apiVersion: "2012-08-10",
      endpoint: "http://localhost:8080",
      region: "eu-west-1",
      accessKeyId: AWS.config.credentials.accessKeyId,
      secretAccessKey: AWS.config.credentials.secretAccessKey,
    });

    let params = {
      TableName: "namOfTheTable",
      Item: {
        uuid: { N: "123" },
        name: { S: JSON.stringify(req.query.mod) },
      },
    };

    const addMod = ddb.putItem(params, function (err, data) {
      if (err) {
        console.log("Error", err);
      } else {
        console.log("Success", data);
      }
    });
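
To rule out the putItem call itself, here is a minimal connectivity check against the same endpoint (just a sketch that reuses the ddb client configured above; it assumes nothing beyond that configuration):

    // Sketch: confirm the endpoint actually answers DynamoDB API calls.
    // Reuses the `ddb` client from above; nothing is written to any table.
    ddb.listTables({}, function (err, data) {
      if (err) {
        console.log("Endpoint check failed", err); // the same "Not Found" here would point at the endpoint/port
      } else {
        console.log("Tables visible at this endpoint:", data.TableNames);
      }
    });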

aws – Serverless game server architecture

This is my first post on the Software Engineering Stack Exchange; I hope the question fits the purpose of this site.

I’m building a serverless game server with the following architecture and was wondering if anyone here has already attempted this and whether it’s a valid strategy. If you’re aware of any caveats I should know about, I’d be grateful for any input.

I’ve run into a problem here which I’m not completely sure how to address.

The infrastructure

I am using AWS Lambda and SQS.

How it works

The core of the game is intended to work as follows:

A game consists of a number of rounds.
Each round has a duration of 10 seconds, has one question and one correct answer.
Each player must have their answer in before the end of the round.

I.e. a game would look like this:
Game started at 10:00:00
round 1 starts at 10:00:00
round 1 ends at 10:00:10
round 2 starts at 10:00:10
round 2 ends at 10:00:20
etc…

My approach:
When all players have confirmed their participation, the game is created.
This will queue up one delayed SQS message per round, each delayed by round number * 10 seconds, which will trigger a Lambda function for that round.

The code of this Lambda function will simply gather all the answers that have been entered for its assigned round, assign a score to each, persist that to a DB, and exit.
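
For reference, here is a simplified sketch of how I would queue those per-round messages with the JS SDK (the queue URL environment variable, gameId and roundCount are placeholders; SQS caps DelaySeconds at 900 seconds, so this only covers games of up to 90 rounds):

    // Sketch: one delayed message per round, delay = round number * 10 seconds.
    const AWS = require("aws-sdk");
    const sqs = new AWS.SQS({ apiVersion: "2012-11-05" });

    async function scheduleRounds(gameId, roundCount) {
      for (let round = 1; round <= roundCount; round++) {
        await sqs
          .sendMessage({
            QueueUrl: process.env.ROUNDS_QUEUE_URL, // placeholder queue URL
            MessageBody: JSON.stringify({ gameId, round }),
            DelaySeconds: round * 10, // round 1 fires after 10s, round 2 after 20s, ...
          })
          .promise();
      }
    }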

The problem

Delayed SQS messages are cool and all; they do exactly what you’d expect and only appear in the queue after your specified delay. However, that is currently not a valid trigger for a Lambda function. I’m aware SNS does serve this purpose, but AFAIK there is no way to delay an SNS.publish() invocation.

What I’m bending over backwards to avoid is having a Lambda function wait X seconds before it publishes an SNS notification.

Are there any solutions that could help me approach my goal?

Thanks in advance

amazon web services – Issue in Migration from GCP to AWS

I tried to migrate an instance from GCP to AWS. I faced a lot of issues during the process, but I finally succeeded in the migration.

I used CloudEndure for the migration. As soon as the migration completed, I tried to SSH into the migrated instance, but the connection was refused on port 22. So I launched another instance (call it “test”), detached the volume of the migrated instance, attached it to the test instance, and mounted that volume as xvdf1 -> /mnt. Then I used systemd-nspawn to check what caused the issue. I found that the services systemd-remount-fs, cloud-init, iscsid, open-iscsi, cloud-config, and lightdm were in failed status.

So I terminated the nspawn container, copied /etc/cloud/cloud.cfg.d/95_mirrors.cfg to /mnt/etc/cloud/cloud.cfg.d, and created the container again with systemd-nspawn. Now cloud-init and cloud-config were in active status, while the remaining services were still failed.

I also checked the ssh service status; it was not loaded. I made the changes below:

update-rc.d ssh defaults
systemctl enable ssh.socket
systemctl enable ssh.service
echo /etc/init.d/ssh restart > /etc/NetworkManager/dispatcher.d/10ssh
chmod 755 /etc/NetworkManager/dispatcher.d/10ssh

After this I rebooted the container, and now the ssh service and ssh socket are active.

I unmounted the volume, detached it from the test instance, and attached it back to the migrated instance as the root volume (xvda1).

I was able to SSH into the server.

But now I am facing two issues. Whenever I need to update, it says:

E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

and after this I have to run rm /var/lib/dpkg/lock and dpkg --configure -a once again before I can install any packages.

And if I try to install any package, say lsof, the installation gets stuck at:

root@ip-192-168-3-237:/etc/cloud/cloud.cfg.d# apt-get install lsof
Reading package lists... Done
Building dependency tree
Reading state information... Done
lsof is already the newest version (4.89+dfsg-0.1).
0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? (Y/n) y
Setting up gce-compute-image-packages (20190801-0ubuntu1~16.04.1) ...

Progress: ( 25%) (#####################################..................................................................)

I am not able to install any packages on the migrated instance.

Any possible solution for this?

hadoop – AWS EMR Multiple Jobs Dependency Contention

Problem

I am attempting to run two PySpark steps in EMR, both reading from Kinesis using KinesisUtils. This requires the dependent library spark-streaming-kinesis-asl_2.11.

I’m using Terraform to stand up the EMR cluster, and I invoke both steps with the arg:

--packages org.apache.spark:spark-streaming-kinesis-asl_2.11:2.4.5

There appears to be contention on startup, with both steps downloading the jar from Maven and causing a checksum failure.

Things attempted

  1. I’ve tried to move the download of the jar to the bootstrap bash script using:

sudo spark-shell --packages org.apache.spark:spark-streaming-kinesis-asl_2.11:2.4.5

This causes problems as spark-shell is only available on the master node and bootstrap tries to run on all nodes.

  2. I’ve tried to limit the above to run only on the master node using:

grep -q '"isMaster":true' /mnt/var/lib/info/instance.json || { echo "Not running on master node, nothing further to do" && exit 0; }

That didn’t seem to work.

  3. I’ve attempted to add Spark configuration to do this in the EMR configuration.json:

    {
      "Classification": "spark-defaults",
      "Properties": {
        "spark.jars.packages": "org.apache.spark:spark-streaming-kinesis-asl_2.11:2.4.5"
      }
    }

This also didn’t work, and it seemed to stop any jars being copied to the master node directory

/home/hadoop/.ivy2/cache

What does work is to manually log onto the master node and run:

sudo spark-shell --packages org.apache.spark:spark-streaming-kinesis-asl_2.11:2.4.5

Then submit the jobs manually without the --packages option.

Currently, all I need to do is manually restart the failed jobs separately (clone the steps in the AWS console) and everything runs fine.

I just want to be able to start the cluster with all steps starting successfully; any help would be greatly appreciated.

Why is ID token used instead of Access token to get temporary credentials in AWS?

After a user logs on to Cognito, they receive an access token and an ID token. The ID token contains sensitive info like phone number, email, etc.

By all standards, an ID token should not be used to gain access to an API: https://auth0.com/docs/tokens?_ga=2.253547273.1898510496.1593591557-1741611737.1593591372

In the backend, to get session credentials (to work with AWS resources), you typically do this:

provider = f'cognito-idp.{region}.amazonaws.com/{user_pool_id}'

identity_id_response = boto3.client('cognito-identity').get_id(
    IdentityPoolId=identity_pool_id,
    Logins={
        provider: id_token  # ID token! not access token
    }
)

Then:

response = boto3.client('cognito-identity').get_credentials_for_identity(
    IdentityId=identity_id_response['IdentityId'],
    Logins={
        provider: id_token  # the ID token again, not the access token
    },
)

Then you can use the returned AccessKeyId, SecretKey, SessionToken, etc.

This is problematic: what if you want to send the ID token to multiple services (via SNS, etc.) so you can perform processing on behalf of the user? You basically end up sending a token that contains sensitive user data around the backend.

So it requires encrypting the token before sending it, which seems like overhead.

Any thoughts?

amazon web services – AWS billing for EC2 instance bandwidth

I have two AWS accounts. In the first account, I have data in S3, and I have an application running on an EC2 instance in the second account. Both are in the same region.

I have mounted the S3 bucket of the first account on the EC2 instance in the second account.

I’m able to see the data transfer in Glances if I create or duplicate (copy an existing file) any file on the mounted S3 bucket, which is expected.

My question is: will AWS charge for the data transfer from EC2 to S3 when they are in the same region but in different accounts?

AWS – YubiKey lost… now what?

I just received my YubiKey and was playing with it. It works brilliantly, but I couldn’t help thinking about what would happen if I were to lose the key. For Google you can print out some backup codes, so that’s covered. But then I wanted to simulate a lost key for Amazon AWS. Apparently there is a page that allows you to troubleshoot your MFA key, see the screenshot:

(Screenshot: YubiKey MFA prompt when logging in to AWS)

However, in the case of a lost YubiKey, I would now click Cancel or press Escape. The screen then quickly changes and the “Troubleshoot MFA” link is no longer present! I also can’t click on it while the message is shown.

How am I supposed to recover my account if I can’t click on the link? Anyone with some tips?

optimization – Why does a query take so long in the statistics thread state in AWS Aurora MySQL?

The following query spends too long in the statistics state, and I can’t figure out why.

DB engine – 5.7.mysql_aurora.2.07.2

DB Size – db.r5.4xlarge

Sample Query Profile output

+--------------------------------+----------+
| Status                         | Duration |
+--------------------------------+----------+
| starting                       | 0.000023 |
| checking query cache for query | 0.000155 |
| checking permissions           | 0.000009 |
| checking permissions           | 0.000002 |
| checking permissions           | 0.000003 |
| checking permissions           | 0.000002 |
| checking permissions           | 0.000009 |
| Opening tables                 | 0.000035 |
| init                           | 0.000102 |
| System lock                    | 0.000035 |
| optimizing                     | 0.000004 |
| optimizing                     | 0.000003 |
| optimizing                     | 0.000011 |
| statistics                     | 0.224528 |
| preparing                      | 0.000030 |
| Sorting result                 | 0.000017 |
| statistics                     | 0.000041 |
| preparing                      | 0.000013 |
| Creating tmp table             | 0.000023 |
| optimizing                     | 0.000013 |
| statistics                     | 0.064207 |
| preparing                      | 0.000035 |
| Sorting result                 | 0.000025 |
| statistics                     | 0.000098 |
| preparing                      | 0.000018 |
| executing                      | 0.000011 |
| Sending data                   | 0.000007 |
| executing                      | 0.000003 |
| Sending data                   | 0.000251 |
| executing                      | 0.000007 |
| Sending data                   | 0.000003 |
| executing                      | 0.000002 |
| Sending data                   | 0.000526 |
| end                            | 0.000007 |
| query end                      | 0.000013 |
| removing tmp table             | 0.000007 |
| query end                      | 0.000004 |
| closing tables                 | 0.000003 |
| removing tmp table             | 0.000004 |
| closing tables                 | 0.000002 |
| removing tmp table             | 0.000005 |
| closing tables                 | 0.000002 |
| removing tmp table             | 0.000004 |
| closing tables                 | 0.000010 |
| freeing items                  | 0.000050 |
| storing result in query cache  | 0.000007 |
| cleaned up                     | 0.000004 |
| cleaning up                    | 0.000017 |
+--------------------------------+----------+

Query

select xo.ITEM, xo.VALUE
from (
         select pi.ITEM, pi.ITEM_GROUP, pi.VALUE
         from TABLE_2 pi
                  inner join (select max(ps.EXPORTED_DATE) as max_expo, ps.ITEM
                              from TABLE_2 ps
                                       inner join (
                                  select max(pp.EFFECTIVE_DATE) max_eff_TABLE_2, pp.ITEM
                                  from TABLE_2 pp
                                  where pp.EFFECTIVE_DATE <= '2020/07/17'
                                    and ITEM in
                                        ('20', '30', '40', '50', '110', '120', '320', '520', '720', '820', '920', '321',
                                         '275', '221')
                                  group by ITEM
                              ) a on ps.EFFECTIVE_DATE = a.max_eff_TABLE_2 and ps.ITEM = a.ITEM
                              group by a.ITEM) rr on rr.ITEM = pi.ITEM and rr.max_expo = pi.EXPORTED_DATE) xo

         inner join (
    select ea.ITEM, ea.CUSTOMER_ID, ea.ITEM_GROUP
    from TABLE_1 ea
             inner join (
        select MAX(e.EFFECTIVE_DATE) eat_max_eff, e.ITEM, e.CUSTOMER_ID
        from TABLE_1 e
        where e.CUSTOMER_ID = '20'
          and ITEM in ('20', '30', '40', '50', '110', '120', '320', '520', '720', '820', '920', '321', '275', '221')
          and EFFECTIVE_DATE <= '2020/07/17'
        group by e.ITEM
    ) aa
    where ea.ITEM = (aa.ITEM)
      and ea.CUSTOMER_ID = aa.CUSTOMER_ID
      and ea.EFFECTIVE_DATE = aa.eat_max_eff) lo
                    on lo.ITEM_GROUP = xo.ITEM_GROUP and lo.ITEM = xo.ITEM;

Indexes

Table 1

mysql> SHOW INDEX FROM T1;
+-------+------------+--------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name     | Seq_in_index | Column_name    | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+--------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| T1    |          0 | PRIMARY      |            1 | CUSTOMER_ID    | A         |     3297549 |     NULL | NULL   |      | BTREE      |         |               |
| T1    |          0 | PRIMARY      |            2 | ITEM           | A         |   687374784 |     NULL | NULL   |      | BTREE      |         |               |
| T1    |          0 | PRIMARY      |            3 | EFFECTIVE_DATE | A         |  1314196480 |     NULL | NULL   |      | BTREE      |         |               |
| T1    |          1 | t1_ix_item   |            1 | ITEM           | A         |     2151649 |     NULL | NULL   |      | BTREE      |         |               |
+-------+------------+--------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+

Table 2


mysql> SHOW INDEX FROM TABLE_2;
+-------+------------+-----------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name              | Seq_in_index | Column_name    | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+-----------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| T2    |          0 | PRIMARY               |            1 | ITEM           | A         |           1 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          0 | PRIMARY               |            2 | ITEM_GROUP     | A         |       14265 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          0 | PRIMARY               |            3 | EFFECTIVE_DATE | A         |    63663076 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          0 | PRIMARY               |            4 | EXPORTED_DATE  | A         |    62464764 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_expo       |            1 | ITEM           | A         |      115823 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_expo       |            2 | EXPORTED_DATE  | A         |    13766454 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_eff_date   |            1 | ITEM           | A         |      115823 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_eff_date   |            2 | EFFECTIVE_DATE | A         |    13766454 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_eff_ig     |            1 | ITEM           | A         |      115823 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_eff_ig     |            2 | EFFECTIVE_DATE | A         |    13766454 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_ix_item_eff_ig     |            3 | ITEM_GROUP     | A         |    68216912 |     NULL | NULL   |      | BTREE      |         |               |
| T2    |          1 | t2_idx_effective_date |            1 | EFFECTIVE_DATE | A         |       79406 |     NULL | NULL   |      | BTREE      |         |               |
+-------+------------+-----------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+

According to this: statistics State in MySQL Processlist

I checked the innodb_buffer_pool_size.

mysql> SHOW VARIABLES LIKE "innodb_buffer_pool_size";
+-------------------------+-------------+
| Variable_name           | Value       |
+-------------------------+-------------+
| innodb_buffer_pool_size | 96223625216 |
+-------------------------+-------------+

In the EXPLAIN output, the row counts are minimal (it depends on the item count in the query; if the item count is 10, the number of rows is about 20). Even though the row counts are minimal, why does the query spend so long in the statistics state?

amazon web services – AWS – Can I use EC2 free tier instance to make use of SES free tier?

The Amazon SES free tier includes 62,000 free emails per month. I wonder: if I create a free tier EC2 instance and use SES from it, would I be eligible for the 62,000 free emails per month? I would want to make the EC2 instance a relay so my other, non-AWS instances can use it to send email.

Is this allowed or possible? If it is possible, I guess I would also need a static Elastic IP for my EC2 instance as well, right?