tls – PCI & e-banking sensitive information tokenization/encryption/obfuscation – Which fields need to be secured?

According to the PCI standard, all businesses that store, process, or transmit payment cardholder data must be PCI compliant.

Given that we are talking about a bank, fields like the card number and the cardholder’s name should be obfuscated when displayed on the screen.
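
For illustration, masking here means something like the sketch below (a hypothetical Python helper of my own; PCI DSS generally permits displaying at most the first six and last four digits of a PAN, and many UIs show only the last four):

    import re

    def mask_pan(pan: str) -> str:
        # Keep only the last four digits for display,
        # e.g. '012345678901234' -> '***********1234'.
        digits = re.sub(r"\D", "", pan)  # strip spaces/dashes
        return "*" * (len(digits) - 4) + digits[-4:]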

  • What about the URL? Is it acceptable to have a request like https://myBank.com/cards/012345678901234/payment-third-party where the card number appears as a path variable?
  • What about the case where the card number appears in the request payload, e.g. {cardNumber: "012345678901234"}? And what about the Network tab (F12): is it acceptable for the fields to appear raw there while they appear obfuscated on the screen?
  • Regardless of the PCI standard, should the e-mail, tax ID, and physical address appear raw on screen? Since this is sensitive information, it seems it should at least be obfuscated for security reasons. And again, is it acceptable for these fields to appear raw in the Network tab (F12)?
  • Are there any other regulations for banks, or security “tips”?

Thank you very much in advance.

encryption – Just found out about luksDump in LUKS (cryptsetup). Is there a way to hide sensitive data like the cipher and hash used?

I have been using cryptsetup for a while now and just found out about the ‘luksDump’ option. I am used to similar tools like VeraCrypt and TrueCrypt, and I am a bit shocked that all this sensitive data is so easily accessible. If I remember correctly, VeraCrypt and TrueCrypt made it impossible to find out the cipher and hash with just a small command.

Is there a way to hide this data?
Thank You!

htaccess – sitemap.xml and serviceworker.js are location-sensitive files. But sensitive to the location in the request path or to the actual filesystem location?

I really like the idea of using .htaccess rewrite rules in combination with the /.well-known/ folder for keeping my webspace tidy and coherently / consistently organised.

For instance, I know that webcrawlers (and humans)

  • will look for my robots.txt in the root folder; and
  • will look for my security.txt in the /.well-known/ folder.

This means that files which ought to be near each other would, conventionally, be separated.

The Setup

But I also know I can rewrite requests using .htaccess (see the sketch after this list), so that I can rewrite a request for:

  • /robots.txt to /.well-known/protocols/robots.txt
  • /.well-known/security.txt to /.well-known/protocols/security.txt
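
For illustration, a minimal .htaccess sketch of those two rewrites (an untested sketch assuming Apache mod_rewrite and an .htaccess file at the document root; paths follow the layout above):

    RewriteEngine On
    # Serve the conventional locations from the tidied folder
    RewriteRule ^robots\.txt$ /.well-known/protocols/robots.txt [L]
    RewriteRule ^\.well-known/security\.txt$ /.well-known/protocols/security.txt [L]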

Great. Now all the protocol files (robots.txt, security.txt, and maybe others) can live together under /.well-known/protocols/.

But what about sitemap.xml and serviceworker.js?

So far, so good.

But what if, analogously, I want to have something like:

  • /.well-known/sitemaps/sitemap.xml
  • /.well-known/serviceworkers/serviceworker.js
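
Concretely, the rewrites I have in mind would look something like this (same hedge as above: an untested mod_rewrite sketch):

    RewriteRule ^sitemap\.xml$ /.well-known/sitemaps/sitemap.xml [L]
    RewriteRule ^serviceworker\.js$ /.well-known/serviceworkers/serviceworker.js [L]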

I know I can use .htaccess to rewrite requests for /sitemap.xml and /serviceworker.js, but I also know that these files are location-sensitive.

That is, the directives in each of these files are only supposed to apply to files:

  • in the same folder; and
  • in subfolders of that folder

See:

The location of a Sitemap file determines the set of URLs that can be
included in that Sitemap. A Sitemap file located at
http://example.com/catalog/sitemap.xml can include any URLs starting
with http://example.com/catalog/ but can not include URLs starting
with http://example.com/images/.

Source: https://www.sitemaps.org/protocol.html#location

and:

The service worker will only catch requests from clients under the
service worker’s scope. (…) The max scope for a service worker is
the location of the worker.

Source: https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers

But what is the sensitive location in this context?

Is it the “location” as it appears in the request path, or is it the actual filesystem location?

Context-sensitive grammar for the language {a^{n+1} b^n c^{n-1} | n ≥ 1}

I have been trying to find a context-sensitive grammar for the language $\{a^{n+1} b^n c^{n-1} \mid n \ge 1\}$ for some time, but I cannot get it done. Any ideas?

Ruggedizing UG11 and other humidity/oxidation sensitive filters

I need to use Schott UG11 filters (and other coloured glass filters, of which UG11 is the most sensitive), which degrade from humidity and oxidation, in a UAV setup. This is an integrated assembly with photodiodes, not something like a camera where the filters can easily be removed, maintained, and reinstalled before each use.

Do any coatings or treatments exist for situations like this? From where I stand it’s almost as if Schott UG11 and similar filters can’t be used anywhere outside of a lab or camera kit where they are constantly babied.

image – How to secure container to prevent sensitive information leak?


attacks – Is this hypothetical system dealing with sensitive keys secure?

I’m a developer in the cryptocurrency space, dealing with private keys (PKs) linked to wallets containing money, and I’m interested to see whether the system I plan to use is secure or whether I’m missing something. I define “secure” as the chance of the PK being obtained by a bad actor being extremely low or negligible.

Computers:

  1. (PC1) Laptop. Was at one point connected to the internet, but will be reformatted and will then boot from a USB stick into some Linux distribution, probably Tails OS, and potentially be air-gapped.
  2. (PC2) Development PC, connected to the internet. Won’t come into contact with private keys holding any large amount of money, just enough to develop with.
  3. (PC3) Ubuntu server hosted on Digital Ocean, locked down with the Digital Ocean Cloud Firewall and using How To Secure A Linux Server as a guide. The disk and swap partition will be encrypted. Required to be connected to the internet.

The Plan:

On PC 2 I download the chosen Linux system (probably Tails, as it leaves no trace on exit) onto a clean USB stick, along with the official software for the chosen blockchain used for creating PKs. PC 1 boots into that Linux system from the USB stick and generates a new PK for a wallet (one that will actually be used and will store money), and that key is written down on paper. PC 3 runs a program I have written that interacts with the blockchain and signs transactions for me automatically; it requires the PK of the wallet it transacts from, which is the wallet created earlier that holds the money. The program doesn’t pull the PK from any file: on startup it asks for the PK to be typed in manually.
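
As a minimal sketch of that startup prompt (assuming the program is written in Python; the function name is mine), the key can be read interactively so it never touches a file, the command line, or shell history:

    import getpass

    def read_private_key() -> str:
        # Prompt without echoing: the key lives only in process memory,
        # never in argv, environment variables, or files on disk.
        pk = getpass.getpass("Wallet private key: ").strip()
        if not pk:
            raise ValueError("no private key entered")
        return pk

Note this does nothing against the keylogger scenario below; it only keeps the key out of at-rest storage.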

Potential Pitfalls:

  • The biggest point of failure, I think, is an attack at the moment the PK is typed in at program startup on PC 3; this is the only point in time the PK is exposed. My plan was to SSH from PC 2 into PC 3 and start the program that way, but then any keylogger on PC 2 would capture the PK as I type it, along with any other passwords. I was thinking of using PC 1 to SSH in instead, but that would mean it is no longer air-gapped. Then again, if I use Tails OS, could I not dedicate one fresh session to creating the PK while air-gapped, and a separate, non-air-gapped session to SSH in, never mixing the two activities?
  • PC 2 has malware that gets onto the USB stick and somehow compromises PC 1. Is there any way I can make the USB stick’s transition from the non-air-gapped PC 2 to the air-gapped PC 1 more secure?
  • A bad actor could get access to my Digital Ocean account and add their own IP to PC 3’s firewall, getting one layer into PC 3; however, they would still be stuck behind the other protection methods (SSH key, disk encryption, etc.).

Other than someone finding the piece of paper I wrote the PK on, is this system secure or is there something I need to do to make this more secure? Thanks!

sensor – How are the focal lengths derived from camera calibration (resectioning) related to the pixel’s photosensitive area?

In camera calibration (the camera resectioning process, e.g. camera calibration with OpenCV), does the result for the focal lengths fx and fy depend on the photosensitive area inside the pixel?

Suppose I have a camera sensor (CCD/CMOS) with a perfectly square grid of pixel elements, but each pixel element has a rectangular photosensitive area (photodiode).
For example, the pixel dimensions are a × a, but the photosensitive area’s dimensions are 0.5a × a.
Would the result of camera calibration still be fx = fy?

And what if the photosensitive area has some arbitrary shape due to additional electronics in the CMOS?
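
For reference, under the usual pinhole model the calibrated focal lengths are determined by the physical focal length and the pixel pitch (the centre-to-centre pixel spacing); in LaTeX notation, with symbols of my own choosing:

    f_x = \frac{F}{p_x}, \qquad f_y = \frac{F}{p_y}

where F is the physical focal length and p_x, p_y are the horizontal and vertical pixel pitches. If that model holds, a square grid gives f_x = f_y regardless of the photodiode’s shape, which instead affects each pixel’s sampling footprint and sensitivity.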

How sensitive are acoustic side-channels to compression with a narrowband codec?

Assume sensitive audio emissions from a mechanical keyboard. These audio emissions are often sufficient to reconstruct the actual key presses that generated the sound. If the audio is compressed using a narrowband audio codec such as G.711, how much of the information is destroyed?
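
One rough way to probe this empirically (a sketch assuming SciPy, using plain resampling as a stand-in for the codec’s band-limiting) is to knock a keystroke recording down to G.711’s 8 kHz sample rate, i.e. roughly 4 kHz of usable bandwidth, and then test how well an existing keystroke classifier still performs:

    import numpy as np
    from scipy.signal import resample_poly

    def simulate_narrowband(audio: np.ndarray, fs: int) -> np.ndarray:
        # Decimate to 8 kHz (G.711's sample rate), discarding spectral
        # content above ~4 kHz, then restore the original rate so the
        # result can be compared sample-for-sample with the input.
        # Real G.711 also applies 8-bit mu-law/A-law quantisation,
        # which this sketch does not model.
        down = resample_poly(audio, 8000, fs)
        return resample_poly(down, fs, 8000)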

Put another way, can acoustic side-channel attacks ever be done using modern telephony?