amazon web services – Understanding AWS CloudFront Cache Policy for dynamic web pages

I am working on an e-commerce website where users can buy and sell products (the website is dynamic).

I have created the following CloudFront distribution with the following Behaviors:

[screenshot]

As you can see, I have two types of responses:

  1. Images: I want the images to be served from S3 bucket (I want images to be cached in CloudFront edge locations).

  2. Dynamic Web Pages: I want to forward them to my Web server (Load Balancer)


Now, this question is about the dynamic pages: I am not sure which cache policy I should choose.

enter image description here

Option 1: Since these pages are dynamic, I can choose the Managed-CachingDisabled policy.

Option 2: I can define my own custom caching policy… I have tested the website with the following custom policy and it works fine:

[screenshot]

With the above custom policy in place, I inspected the web requests and noticed that the dynamic web pages aren't cached (they all return Miss from CloudFront). This is because IIS adds Cache-Control: private to all responses (private means the response may only be cached in the web browser).

[screenshot]
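That Cache-Control: private behavior can be illustrated with a small sketch of the shared-cache storage rule (a simplification of the HTTP caching rules, not CloudFront's actual logic):

```python
# Simplified sketch of the HTTP shared-cache rule that produces the
# "Miss from CloudFront" above: a shared cache (CDN) must not store a
# response marked Cache-Control: private or no-store.

def shared_cache_may_store(cache_control: str) -> bool:
    """Return True if a shared cache such as a CDN may store the response."""
    directives = {d.strip().lower() for d in cache_control.split(",")}
    return not directives & {"private", "no-store"}

# IIS adds "Cache-Control: private", so every dynamic page is a Miss:
print(shared_cache_may_store("private"))             # False
print(shared_cache_may_store("public, max-age=60"))  # True
```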

So both disabling the cache and using my own custom policy work fine… is there any benefit to using either of these options?

tls – Separate SSL certs on CloudFront and Heroku?

I find SSL really confusing, but anyway.

For my frontend static website, I host on S3 and distribute using CloudFront. I just use something simple like Comodo SSL to generate a certificate and then go through the AWS ACM process to import it. All good.

But I realized that all my API requests of course go to a Node.js server running somewhere else in the world on Heroku (OK, they probably host on AWS somewhere :P). So with my domain name www.domain.com being SSL-certified and pointing to d12345.cloudfront.net, and API requests hitting myapp.heroku.com, do I need a separate SSL certificate on Heroku? Heroku has a tempting automatic SSL feature for paid dynos, but I don't understand whether I need it or whether I can use my existing certificate for www.domain.com.

magento2 – Cloudfront Redirects to Origin only when Static Content Signing is Enabled

I’m using AWS CloudFront with Magento 2.4.
When dev/static/sign is disabled, I have absolutely no issues with the CDN.
When dev/static/sign is enabled, though, all static resources are redirected to the origin URL.

I’ve tried this and this, with no success.

Has anyone encountered this?

linux – Is it reasonable to embed Cloudfront IP ranges into an NGINX configuration?

I’m using CloudFront as a proxy for my EC2 instance, so all traffic is routed through CloudFront first.

The issue is that I also need the original client's IP address for each request to my EC2 instance, so I need to examine the X-Forwarded-For header to find the original IP address (the default IP my EC2 instance sees is just the CloudFront server's).

I found this article which discusses how to find the original client IP when using a middle-man proxy such as CloudFront.

Their proposed solution is to list all of CloudFront's edge server IP ranges in your NGINX configuration, then read the X-Forwarded-For IPs from right to left and take the first one that isn't a trusted IP address (i.e., one listed in the NGINX config).
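The right-to-left scan they describe can be sketched as follows (the trusted ranges below are documentation placeholders, not real CloudFront ranges):

```python
import ipaddress

# Placeholder "trusted proxy" ranges -- in the real setup these would be
# the CloudFront ranges listed in the NGINX config.
TRUSTED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def client_ip(xff_header: str) -> str:
    """Scan X-Forwarded-For right to left; return the first untrusted hop.

    Trusted (proxy) entries are skipped because they were appended by
    proxies we trust; the first untrusted address is the best guess at
    the real client, and anything left of it is unverifiable.
    """
    hops = [h.strip() for h in xff_header.split(",")]
    for hop in reversed(hops):
        if not any(ipaddress.ip_address(hop) in net for net in TRUSTED_RANGES):
            return hop
    return hops[0]  # every hop was a trusted proxy; fall back to leftmost

# "1.2.3.4" could be spoofed by the client; the rightmost untrusted
# address (192.0.2.44) is what the trusted proxy actually saw.
print(client_ip("1.2.3.4, 192.0.2.44, 203.0.113.5"))  # 192.0.2.44
```

This is essentially what NGINX's real_ip module does with `real_ip_header X-Forwarded-For;` and `real_ip_recursive on;` once the trusted ranges are declared via `set_real_ip_from`.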

This all sounds well and good, but what if a CloudFront edge server IP range changes, so that I have to update the NGINX configuration? I really don't want to write some sort of custom script that constantly downloads their IP ranges JSON file, parses it, updates my NGINX configuration file, and restarts NGINX whenever anything has changed. That seems like a lot of work and potential failure points.
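For what it's worth, the parsing half of such a script is small. A sketch using a tiny inline sample in place of the real download from https://ip-ranges.amazonaws.com/ip-ranges.json (the prefixes shown are made up):

```python
import json

# Inline stand-in for the downloaded ip-ranges.json document. The real
# file has the same shape: a "prefixes" list of {ip_prefix, region, service}.
sample = json.loads("""
{
  "syncToken": "1",
  "prefixes": [
    {"ip_prefix": "203.0.113.0/24", "region": "GLOBAL", "service": "CLOUDFRONT"},
    {"ip_prefix": "192.0.2.0/24",   "region": "us-east-1", "service": "EC2"}
  ]
}
""")

def cloudfront_prefixes(doc: dict) -> list:
    """Extract the CIDR blocks tagged with the CLOUDFRONT service."""
    return [p["ip_prefix"] for p in doc["prefixes"] if p["service"] == "CLOUDFRONT"]

# Emit the matching NGINX trusted-proxy lines:
for cidr in cloudfront_prefixes(sample):
    print(f"set_real_ip_from {cidr};")  # set_real_ip_from 203.0.113.0/24;
```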

I suppose it wouldn't be much of an issue if I were relatively guaranteed that these CloudFront IP ranges rarely change (as in, perhaps only once every 5 years or so), but I can't find any such guarantee, or anyone reporting their experience with this.

How should this situation be handled?

cache – CloudFront for WordPress site on EC2

I have an EC2 instance with a LAMP stack. I am following this guide for setting up CloudFront for a WordPress site.

https://aws.amazon.com/blogs/startups/how-to-accelerate-your-wordpress-site-with-amazon-cloudfront/

Here are the settings I've made so far on Cloudfront for this distribution.

General configuration

[screenshot]

Origin settings

[screenshot]

Behavior settings

[screenshots]

Both the /wp-includes/* and /wp-content/* behaviors point to the EC2 origin.

Everything is configured, and the site contains product information with images stored in the WP database. When I visit the site, most of these images display correctly, except for 4–5 images, which show only blank gray thumbnails.

What am I doing wrong?

amazon web services – Disable CloudFront caching if file not found

I created a CloudFront distribution in front of an S3 bucket, with a RoutingRule to redirect to a Lambda function if the requested file is not found. I am using this to resize images.

Desired flow:

  1. Request the file from CloudFront.
  2. If the file is not in the CloudFront cache, check S3.
  3. If the file is not found in S3, redirect to the Lambda function.
  4. The Lambda function finds the original file, resizes it, and redirects back to the CloudFront URL.

Set of redirect rules on the S3 website endpoint:

    <RoutingRules>
      <RoutingRule>
        <Condition>
          <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
        </Condition>
        <Redirect>
          <Protocol>https</Protocol>
          <HostName>mylambda.execute-api.us-east-1.amazonaws.com</HostName>
          <ReplaceKeyPrefixWith>/?key=</ReplaceKeyPrefixWith>
          <HttpRedirectCode>307</HttpRedirectCode>
        </Redirect>
      </RoutingRule>
    </RoutingRules>
I have a problem with step 4: when the Lambda function redirects back to the original URL, CloudFront has cached the 404(?), and the S3 routing rule redirects back to the Lambda function, causing a loop.

  1. I confirmed that the lambda function generated the file.
  2. If I invalidate the file in CloudFront, I can see that it is then successfully served from S3.

I tried adding a 0 TTL to the 404 error page but it didn't help.

Custom error response

The redirect rule returns status code 307 (Temporary Redirect), but I don't know how to set a 0 TTL on this; I couldn't find the option on CloudFront's custom error response page.

[screenshot]
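For reference, the error caching TTL is configured per error code rather than per page; in CloudFormation terms the setting looks roughly like the fragment below (illustrative, not a complete distribution config). Note, though, that custom error responses apply to 4xx/5xx status codes, so as far as I can tell a 307 redirect from S3 would be cached according to the cache behavior's normal TTLs rather than this setting:

```yaml
# Illustrative CloudFormation fragment (not a full DistributionConfig):
# cache origin 404s for 0 seconds so CloudFront re-checks S3 each time.
CustomErrorResponses:
  - ErrorCode: 404
    ErrorCachingMinTTL: 0
```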

This is a follow-up question about RoutingRules in AWS S3 static website hosting.

I appreciate your help.

amazon web services – API Gateway Invoke Endpoint works fine; Cloudfront returns 403

I configured an AWS API Gateway proxy endpoint for a Lambda function and deployed it in a stage called auth. The invoke URL shown in the API Gateway console works fine and runs my Lambda.

However, I have created a CloudFront distribution that I want to put in front of API Gateway. I set the origin to the API Gateway URL and added a behavior to forward /auth/* to the API Gateway origin:

[screenshot]

Origin settings:

[screenshot]

Behavior settings:

[screenshot]

However, this does not work. I get a 403 Forbidden response from CloudFront:

[screenshot]

This is extremely confusing to me because I have almost identical settings in another AWS account, where it works fine. Does anyone know what the problem might be?

amazon s3 – How to serve gzipped static files from S3 / CloudFront with Terraform?

I have a lot of static files for a website. I have used Terraform to provision the infrastructure, and then, once provisioned, I synchronize my static assets in a local dist folder to the S3 buckets with the AWS CLI: aws s3 sync ./dist/ s3://${bucket_name}/.

Now, I want to make sure that (ideally) both the development and production deployments send all files (including images) to the end user with gzip compression. Unfortunately, after much googling, I can't get a straight answer on how this is supposed to be done. It is not clear to me whether:

  1. the S3 bucket(s) need to be configured in some way to serve gzipped files, and/or

  2. the individual files themselves need to be gzipped locally and configured at upload time (through the CLI), and/or

  3. the CloudFront distribution in front needs to be configured to serve assets with gzip compression.

I would greatly appreciate if someone could:

A. Help me get conceptual clarity (for example, "It's number 2. You configure XYZ on the S3 buckets by doing ABC"), and

B. Provide or point to some Terraform / AWS CLI scripts or commands that accomplish this.
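A sketch of the pre-compression approach (option 2): gzip the files locally into a parallel tree, then upload with the Content-Encoding: gzip object metadata so browsers decompress transparently. The paths, bucket name, and sync flags here are assumptions for illustration:

```python
import gzip
import shutil
from pathlib import Path

def gzip_tree(src: str, dst: str) -> None:
    """Gzip every file under src into a parallel tree under dst.

    Files keep their original names so URLs don't change; the gzip
    encoding must then be declared at upload time with the
    'Content-Encoding: gzip' object metadata.
    """
    for path in Path(src).rglob("*"):
        if path.is_file():
            out = Path(dst) / path.relative_to(src)
            out.parent.mkdir(parents=True, exist_ok=True)
            with path.open("rb") as f_in, gzip.open(out, "wb") as f_out:
                shutil.copyfileobj(f_in, f_out)

# Example (hypothetical paths and bucket):
# gzip_tree("./dist", "./dist-gz")
# then: aws s3 sync ./dist-gz s3://$BUCKET/ --content-encoding gzip
```

For option 3, CloudFront also has a "Compress objects automatically" setting, which gzips text-based content types at the edge when the viewer sends Accept-Encoding: gzip; note that image formats like JPEG and PNG are already compressed and generally gain nothing from gzip in either approach.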

Thank you!

tls – Is there a security issue if we don't use SSL between AWS CloudFront and AWS ALB?

I have an application hosted on AWS. It has an Application Load Balancer in front and is also connected to CloudFront to handle heavy load. In my case, I have enabled SSL only on CloudFront and have not enabled SSL on the ALB. The application works fine without any problem and shows a secure lock symbol in all browsers. But I have the feeling that communication between CloudFront and the ALB is not secure: anyone who tries to intercept the traffic between them can do so.

Is there any security risk like that? Can anyone intercept the traffic, or is all communication within AWS secured?

I have heard about the ALB's SSL offloading feature, where SSL is not enabled between the ALB and the EC2 servers behind it. I assumed the same applies between CloudFront and its ALB origin. Is that correct, or is there a security problem if we do not enable SSL between CloudFront and the ALB?

amazon s3 – How to route one path to CloudFront + S3 and the rest to an ALB in Route 53?

The project has a domain name, foobar.com, pointing to a legacy system that hosts and serves mainly static files. The project became more complex over time, and on the AWS ALB certain /routes point to isolated micro-applications (for example, a Node.js API service, a PHP service such as a CMS, etc.), all deployed through AWS ECS. But now I would like to deploy a project that is only static files to S3 + CloudFront (to take advantage of the CDN and lower prices), while still using the same foobar.com + ALB and routing the /foobar path to the S3 + CloudFront location. I haven't found a solution yet, and I'm not sure it is even possible to configure. What options exist for my use case? The current state is that the primary domain name points to an A record ALIAS for dualstack-alb.

CloudFront has the optional alternate domain name (CNAME), but as mentioned earlier, foobar.com points to the legacy project container, with only certain /routes going elsewhere.

When a new distribution is created, I get a foobar.cloudfront.net domain that I can use to access the deployed static files. But that is obviously not what I want: I would like to use the primary domain name foobar.com, and when /routeX is requested, point to or serve foobar.cloudfront.net. That is:

foobar.com > shows the legacy project, in EC2 container 1
foobar.com/cms > shows the CMS project, in EC2 container 2
foobar.com/myNewProject/ > should show project X, in S3

The following diagram shows how this works or should work:

[diagram]

From the research I have done so far, it seems I have to point the domain's A record to CloudFront instead of having it point to the ALB ALIAS dualstack.xxxxxxs-alb-xxxxx, and then have CloudFront point /route to S3 and everything else to the ALB. That's where my question comes from!

I'm just poking around right now, and I can't seem to find how to configure /route to S3 and everything else to the ALB. I can see the CNAME option in CloudFront, but there are no routing rules or anything similar.
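In CloudFront the routing piece lives in the cache behaviors: path patterns, not CNAMEs, decide which origin serves a request. A hedged Terraform sketch of the shape this could take (all names, domains, and IDs are made up, and required arguments such as the viewer certificate, restrictions, and forwarding settings are omitted):

```hcl
# Illustrative fragment only: one CloudFront distribution with two origins,
# routing /myNewProject/* to S3 and everything else to the ALB. The Route 53
# A record would then alias foobar.com to this distribution instead of the ALB.
resource "aws_cloudfront_distribution" "site" {
  aliases = ["foobar.com"]

  origin {
    origin_id   = "alb"
    domain_name = "dualstack-alb.example.elb.amazonaws.com"
    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  origin {
    origin_id   = "static"
    domain_name = "my-static-bucket.s3.amazonaws.com"
  }

  default_cache_behavior {
    target_origin_id = "alb"
    # ... everything not matched below goes to the ALB ...
  }

  ordered_cache_behavior {
    path_pattern     = "/myNewProject/*"
    target_origin_id = "static"
    # ... static files served from S3 ...
  }

  # (viewer_certificate, restrictions, etc. omitted for brevity)
}
```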