tls: what is the relevance of HSTS in an HTTP application?

We all know that HSTS should be implemented on HTTPS applications. Recently, I came across HSTS implemented on a plain-HTTP application.

I need to respond to the client. As I understand it, HSTS implemented over HTTP has no effect: per RFC 6797, browsers ignore the Strict-Transport-Security header when it arrives over an insecure connection. Please let me know if that is correct.
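To illustrate my point, here is a minimal sketch in Node/TypeScript (an Express-style app is my assumption here, purely for illustration) of how the header is normally emitted:

import express from 'express';

const app = express();

// When TLS terminates at a reverse proxy, trust X-Forwarded-Proto so that
// req.secure reflects the original scheme.
app.set('trust proxy', 1);

// Attach HSTS only on connections that are already secure. Per RFC 6797,
// browsers ignore a Strict-Transport-Security header received over plain
// HTTP, so sending it on port 80 accomplishes nothing.
app.use((req, res, next) => {
    if (req.secure) {
        res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
    }
    next();
});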

curl – Why is my collectd curl_json block not working when I change the URL from HTTP to HTTPS and add the name of the certificate file?

I am trying to add a curl_json configuration on a basic collectd monitoring server that already uses the dbi and mysql plugins. I am trying to access the JSON API of our MediaWiki wikis, which can be done from the command line with curl -s https://hostname/api.php?querystring. With curl I can hit http and use -L to handle our Varnish's 301 redirect from http to https, but the collectd curl_json plugin can't handle redirection, so I have to use https explicitly. To do so, I added the CACert parameter, but it doesn't connect and I can't tell why. I have used tcpdump and strace extensively. I can see that when I use http, collectd sends the HTTP request and gets the 301 back, but I don't see any request at all when I use https. If I'm reading the strace output correctly, it looks like it's getting back -1 EAGAIN (Resource temporarily unavailable).

The curl_json.conf block basically looks like this:

LoadPlugin curl_json

<Plugin curl_json>
  <URL "https://hostname/api.php?querystring">
    Instance "wiki1"
    CACert "/etc/ssl/certs/ca-certificates.crt"
    <Key "...">
      Type "wiki_pages"
    </Key>
    ...
  </URL>
</Plugin>


One difference I can see is that /usr/bin/curl and /usr/lib/collectd/curl_json.so use different libcurl libraries:

# ldd /usr/bin/curl | grep curl
        libcurl.so.4 => /usr/lib/x86_64-linux-gnu/libcurl.so.4 (0x00007fad8e93f000)

# ldd /usr/lib/collectd/curl_json.so | grep curl
        libcurl-gnutls.so.4 => /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4 (0x00007fedc337a000)

# dpkg -S /usr/lib/x86_64-linux-gnu/libcurl.so.4 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4
libcurl4:amd64: /usr/lib/x86_64-linux-gnu/libcurl.so.4
libcurl3-gnutls:amd64: /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4

# dpkg -l | grep libcurl
ii  libcurl3-gnutls:amd64             7.58.0-2ubuntu3.8                   amd64        easy-to-use client-side URL transfer library (GnuTLS flavour)
ii  libcurl4:amd64                    7.58.0-2ubuntu3.8                   amd64        easy-to-use client-side URL transfer library (OpenSSL flavour)

Also, FWIW, the wikis are behind an AWS ALB that uses the default security policy, so I don't think it's a protocol-version issue.

This runs CollectD 5.7.2-2ubuntu1.2 on Ubuntu 18.04.4.
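For what it's worth, the endpoint and certificate chain can also be double-checked independently of libcurl, for example with Node 18+'s built-in fetch (a quick TypeScript sketch; hostname is the placeholder used above):

// Sanity-check the HTTPS endpoint outside libcurl (Node 18+ global fetch).
async function check(): Promise<void> {
    const res = await fetch('https://hostname/api.php?querystring');
    console.log(res.status, res.headers.get('content-type'));
    console.log((await res.text()).slice(0, 200)); // first 200 chars of body
}

check().catch((err) => console.error('TLS/HTTP failure:', err));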

Any ideas what I might be missing?

javascript – Fetch HTTP wrapper and API

I have written a basic HTTP wrapper around fetch, and then an API service that adds some default options/parameters. The idea is that HttpClient is a very basic, minimal wrapper that you could reuse internally across multiple projects, while the Api service is specific to one project. I would like some comments on this. I hope to extend HttpClient to include consistent error messages and maybe logging, etc.

export class HttpClient {

    public baseUrl = ''; // TODO: Not currently used in request

    public requestOptions(url, options?) {
        return new Request(url, options);
    }

    // Handle response
    // Fetch doesn't distinguish Error / Success
    public status(response) {
        if (response.ok) {
            return response.json();
        } else {
           const error = JSON.stringify({
                status: response.status,
                message: response.statusText
            });
           throw new Error(error);
        }
    }

    // Handle only server errors
    public handleError(error) {
        throw error;
    }

    public get(url?: string, options?: any) {
        const requestOptions = this.requestOptions(url, {method: 'GET', ...options});
        return fetch(requestOptions)
        .then(this.status)
        .catch(this.handleError);
    }

    // options(endpoint: string, baseUrl?: string, options?: any) {
    //     let base = !!baseUrl ? baseUrl + endpoint : this.baseUrl;
    //     let request = this.requestOptions(base, Object.assign({method: 'OPTIONS'}, options))
    //     return fetch(request).then(this.status).catch(this.handleError);
    // }

    public post(url, body, options?) {
        const request = this.requestOptions(url, {body, method: 'POST', ...options});
        return fetch(request).then(this.status).catch(this.handleError);
    }
}

import { AuthenticationService } from '../authentication/authentication.service';
import { HttpClient } from '../http-client/http-client';
import { apiOptions } from './api-options';

export class Api {

    private http;
    private authentication;
    private options = {
        reportVersions: []
    };

    constructor(
        options = apiOptions,
        httpClient: HttpClient = new HttpClient(),
        authentication = AuthenticationService
    ) {
        this.options = options;
        this.authentication = authentication;
        this.http = httpClient;
    }

    // TODO: Move to HTTP Client
    public URL(url, options: any = {}) {
        //  const getURL = new URL(url);
        if (Object.prototype.hasOwnProperty.call(options, 'searchParams')) {
            return url + this.searchParams(options.searchParams);
        }

        return url;
    }

    public getApiOptions(url, options: any = {}) {
        const reportVersion = this.options.reportVersions.find((version) => {
            return url.match(version.urlRegex);
        });

        if (reportVersion) {
            return {
                ...options,
                searchParams: {
                    ...options.searchParams,
                    reportVersionNumber: reportVersion.version
                }
            };
        }
        return options;
    }

    public addAuthHeaders(options?: any) {
        return this.authentication.authenticate().then((token) => {
            const headers = new Headers({
                Authorization: 'Bearer ' + token.access_token,
                ...options
            });
            return { headers, ...options };
        });
    }

    // TODO: Update the get request to accept options/params
    public async get(url?: string, options?: any) {
        const requestOptions = this.getApiOptions(url, options);
        const requestURL = this.URL(url, requestOptions);
        const authedOptions = await this.addAuthHeaders();
        return this.http.get(requestURL, authedOptions).catch(this.handleError);
    }

    public async post(body: any, url?: string, options?: any) {
        const requestOptions = this.getApiOptions(url, options);
        const requestURL = this.URL(url, requestOptions);
        const authedOptions = await this.addAuthHeaders(options);
        return this.http.post(requestURL, body, authedOptions).catch(this.handlePostError);
    }

    // TODO: Move to HTTP Client
    private searchParams(searchParams): string {
        let query = '';
        if (searchParams) {
            query = Object.keys(searchParams)
                .filter((k) => !!searchParams[k])
                .map((k) => encodeURIComponent(k) + '=' + encodeURIComponent(searchParams[k]))
                .join('&');
        }

        return !!query ? '?' + query : '';
    }

    private handleError = (errorMessage: Error) => {
        const message = JSON.parse(errorMessage.message);
        if (message.status === 401) {
            this.authentication.logout();
        }
        throw errorMessage;
    }

    private handlePostError = (errorMessage: Error) => {
        const message = JSON.parse(errorMessage.message);
        if (message.status === 401) {
            this.authentication.logout();
        }
        throw errorMessage;
    }
}
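
For context on how I intend the two layers to compose, here is a usage sketch (the endpoint URL and the report-version entry are made up for illustration, and it assumes AuthenticationService.authenticate() resolves with a token object):

// Hypothetical per-project configuration.
const api = new Api({
    reportVersions: [
        { urlRegex: /\/reports\//, version: 2 },
    ],
});

// getApiOptions() matches the URL against reportVersions and appends
// reportVersionNumber to the query string built by searchParams().
api.get('https://example.com/api/reports/summary', {
    searchParams: { from: '2020-01-01' },
})
    .then((data) => console.log(data))
    .catch((err) => console.error(err.message));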

tomcat – Tomcat 9 HTTP NIO connector keeps many stale connections open

I use Tomcat 9 to serve many clients through the HTTP NIO connector on port 8080. Here is the relevant Connector from my configuration in server.xml:

<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="30000" ... />

As you can see, I configured connectionTimeout to 30 seconds, but Tomcat seems to keep hundreds of stale connections whose clients, I'm fairly sure, have already disconnected. There may be various network problems that cause client connections to be dropped uncleanly, but shouldn't Tomcat detect and drop these inactive connections? Is something wrong in my connector settings? Why is the state of all these connections shown as S (Service)? And what should I do if I want to force Tomcat to never keep a connection open for more than 30 seconds under any circumstances?


html5 – Problem with http request in AngularJS

Trying to make an asynchronous request to display a listing on screen with AngularJS, I run into the problem that the data is not displayed. What could it be? No error appears in the console.

index.html:

<!DOCTYPE html>
<html ng-app="Practicando AngularJS">
<head>
    <meta charset="utf-8">
    <title>AngularJS - http</title>
    <script src="angular.min.js"></script>
    <script src="controller.js"></script>
</head>
<body ng-controller="FirstController">
    <ul>
        <li ng-repeat="post in posts">
            {{post.title}}
            <p>{{post.body}}</p>
        </li>
    </ul>
</body>
</html>
controller.js:

angular.module("Practicando AngularJS", [])
    .controller("FirstController", function ($scope, $http) {
        $scope.posts = [];
        $http.get("http://jsonplaceholder.typicode.com/posts")
            .then(function(data) {
                console.log(data);
                $scope.posts = data;
            })
            //.error(function(err) {

            //});
    });

2013 – SP2013: after going from http to https, search shows no results

I went from http to https using certificates, Alternate Access Mappings, and the IIS "URL Rewrite" module (following these links: Configure SSL for SP2013 and IIS URL rewrite).
But since then my search application has had some issues.

What do I have

When I'm on my sandbox site, I try to retrieve documents like test.xlsx, or ones titled "XXX_Test", by typing "test" in the search bar.
It redirects me to "https://myWebApplication.MyDomain.com/sites/MySiteCollection/_layouts/15/osssearchresults.aspx?u=https%3A%2F%2FmyWebApplication%2EMyDomain%2Ecom%2Fsites%2FMySiteCollection&k=test"

And I got no results, just the message "There are no items matching your search. Tips: Please try another spelling, etc."

What did I do

Before all of this, I took some steps to redirect from http to https. My Alternate Access Mappings are set up correctly and https works. Only the search does not.

What did I try?

  1. First, identify whether the search services are OK:

To verify that the Search Service is up and running, I did the following: go to Central Administration (which remains on http) > Application Management > Manage service applications > Search Service Application, then click "Result Sources", "Add Result Source", and then "Launch Query Builder".
Once in it, I simply replace {searchTerm} with "test" and hit the "Test query" button. My query returns all documents called test_Smtg.doc / xlsx etc. and all documents and folders that contain "test" in their title.

  2. Try something in the URL:
    I said earlier that the search bar sends me to "https://myWebApplication.MyDomain.com/sites/MySiteCollection/_layouts/15/osssearchresults.aspx?u=https%3A%2F%2FmyWebApplication%2EMyDomain%2Ecom%2Fsites%2FMySiteCollection&k=test"

Here, I thought the https might be the culprit. I tried removing the s from the u parameter, like the following:
"https://myWebApplication.MyDomain.com/sites/MySiteCollection/_layouts/15/osssearchresults.aspx?u=http%3A%2F%2FmyWebApplication%2EMyDomain%2Ecom%2Fsites%2FMySiteCollection&k=test"

And … I got the correct results.

  3. Check in Central Administration:

So I thought maybe my problem came from the Content Source!
I went back to Central Administration to check whether my https site is taken into account in the "Local SharePoint sites" content source:
Content Sources > Local SharePoint sites > Start Addresses:

It looks good!

So I don't know what to do to make my search work …

What am I asking?

Do you have any idea how I could change my search settings to take SSL into account?

I'm pretty stumped, because I have to push https to the production environment very soon, and I don't know whether everything will be fine by then…

(TL;DR) – After changing various settings (IIS bindings, SP Alternate Access Mappings and IIS URL Rewrite) to redirect all my web apps from http to https, my search doesn't work when searching https site collections. However, it works on http.

Thank you very much for your time and patience!

Which HTTP caches have full support for HTTP/2?

I am planning a technology stack, and I find only scant mentions of HTTP/2 in the documentation for the latest versions of Varnish and Squid. Am I missing something, or is it really difficult to cache HTTP/2? Which HTTP caches explicitly support HTTP/2?

htaccess: you configured HTTP (80) on the standard HTTPS port (443)!

I currently have a website that runs under https, and it works fine. I tried to add something to my .htaccess file so that requests to http://example.com are redirected to https://www.example.com, but my redirect is not working, even though I have checked the following:

  1. .htaccess is enabled with AllowOverride and is being read.
  2. The file name is not the problem.
  3. The rules were moved to the beginning of the file, but it still doesn't work.
  4. The syntax was tested with an online tool and debugged without problems.

Below is my .htaccess file for reference:

##
# @package    Joomla
# @copyright  Copyright (C) 2005 - 2018 Open Source Matters. All rights reserved.
# @license    GNU General Public License version 2 or later; see LICENSE.txt
##

##
# READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
#
# The line 'Options +FollowSymLinks' may cause problems with some server configurations.
# It is required for the use of mod_rewrite, but it may have already been set by your 
# server administrator in a way that disallows changing it in this .htaccess file.
# If using it causes your site to produce an error, comment it out (add # to the 
# beginning of the line), reload your site in your browser and test your sef urls. If 
# they work, then it has been set by your server administrator and you do not need to 
# set it here.
##

## No directory listings
<IfModule autoindex>
  IndexIgnore *
</IfModule>

## Suppress mime type detection in browsers for unknown types
<IfModule mod_headers.c>
Header always set X-Content-Type-Options "nosniff"
Header append X-FRAME-OPTIONS "SAMEORIGIN"
Header set Strict-Transport-Security "max-age=31536000" env=HTTPS
</IfModule>


## Can be commented out if causes errors, see notes above.
Options +FollowSymlinks
Options -Indexes

## Mod_rewrite in use.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^[^.]+\.[^.]+$
RewriteCond %{HTTPS}s ^on(s)|
RewriteRule ^ http%1://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

## Begin - Rewrite rules to block out some common exploits.
# If you experience problems on your site then comment out the operations listed 
# below by adding a # to the beginning of the line.
# This attempts to block the most common type of exploit `attempts` on Joomla!
#
# Block any script trying to base64_encode data within the URL.
RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
# Block any script that includes a <script> tag in URL.

authentication: in-memory JWT for API authentication, with an HTTP-only cookie for sessions?

I've spent some time reading about this, and I know it's a common theme, but I was hoping to get some feedback on my authentication approach.

I have a SPA. It must authenticate to 1) my app backend and 2) some APIs on AWS. I am using Cognito to authenticate user credentials.

My idea for addressing this is as follows (see the sketch after the list):

  1. User authenticates through AWS Cognito API
  2. Receive JWT
  3. Keep the JWT in memory only (no local storage, to mitigate XSS)
  4. Pass JWT to the application backend
  5. Backend sets a Secure, HTTP-only cookie on the client, storing the JWT within this cookie.
  6. Cookie is used to maintain sessions with the application backend
  7. The in-memory JWT is used to authenticate with the AWS APIs
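
Here is a minimal TypeScript sketch of that flow as I picture it (the /session endpoints and the cognitoAuthenticate helper are hypothetical names, just for illustration):

// Assumed wrapper around the Cognito authentication API.
declare function cognitoAuthenticate(user: string, pass: string): Promise<string>;

let inMemoryJwt: string | null = null; // step 3: the JWT lives in memory only

// Steps 1-2 and 4-5: authenticate with Cognito, then hand the JWT to the
// backend, which responds with a Secure, HttpOnly session cookie storing it.
async function login(user: string, pass: string): Promise<void> {
    inMemoryJwt = await cognitoAuthenticate(user, pass);
    await fetch('/session', { // hypothetical backend endpoint
        method: 'POST',
        credentials: 'include', // let the browser keep the session cookie
        headers: { Authorization: 'Bearer ' + inMemoryJwt },
    });
}

// Steps 6-7: after a reload the in-memory JWT is gone, but the session
// cookie is still sent, so the backend can return the JWT it stored.
async function getJwt(): Promise<string> {
    if (inMemoryJwt) {
        return inMemoryJwt;
    }
    const res = await fetch('/session/token', { credentials: 'include' }); // hypothetical endpoint
    if (!res.ok) {
        throw new Error('No valid session; re-authenticate');
    }
    const { token } = await res.json(); // response shape assumed: { token: string }
    inMemoryJwt = token;
    return token;
}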

This is fine, but when the user closes the browser or opens a new tab, the JWT will no longer be in memory. However, they will still have the session cookie. So my thought is that the app will ask the application server for the JWT (carried in the cookie) before hitting the AWS APIs.

In this way, I have a secure HTTP-only cookie that maintains sessions with my application server, and I also have the JWT to authenticate with the AWS APIs. If the user presents a valid session cookie, the server can safely hand back the JWT contained within it.

My only concern with this is that it seems a little circular: the JWT is used to obtain the cookie, and the cookie is later used to obtain the JWT again. Otherwise, I think it seems pretty solid.

Thoughts?