computer architecture – L1 cache covert channel timing attack

I'm trying to understand Section 3 (L1 Cache Missing) of the paper "Cache Missing for Fun and Profit". I'm stuck on how the covert channel is constructed.

Specifically, I don't understand the following passage (the parts I find most confusing are highlighted):

Therefore, a covert channel can be constructed as follows: The Trojan
process allocates an array of 2048 bytes, and for each 32-bit word it
wishes to transmit, it accesses byte 64i of the array if bit i of the
word is set. The Spy process allocates an array of 8192 bytes, and
repeatedly measures the amount of time required to read bytes 64i,
64i + 2048, 64i + 4096 and 64i + 6144 for each 0 ≤ i < 32. Each memory
access performed by the Trojan will evict a cache line owned by the Spy,
causing those lines to be reloaded from the L2 cache, which adds an
additional latency of approximately 30 cycles if the memory accesses
are dependent.

I don't understand why byte 64i needs to be accessed, or what the relevance of bit i being set is in this context. And I don't understand why the Spy process would use the pattern of reading bytes 64i, 64i + 2048, 64i + 4096 and 64i + 6144.
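For context on the numbers: on the Pentium 4 the paper targets, the L1 data cache is 8 KB, 4-way set-associative with 64-byte lines, i.e. 32 sets, and one way of the cache spans 32 × 64 = 2048 bytes. Byte 64i of the Trojan's array therefore lands in cache set i, while bytes 64i, 64i + 2048, 64i + 4096 and 64i + 6144 of the Spy's 8192-byte array are exactly the four lines that share set i (one per way). The index arithmetic can be checked with a small sketch (Python, cache parameters as above, arrays assumed suitably aligned):

```python
# Cache geometry from the paper's Pentium 4: 8 KB L1D, 64-byte lines, 4-way.
LINE = 64
SETS = 32
WAY_SPAN = LINE * SETS  # 2048 bytes: the stride between lines of the same set

def cache_set(offset):
    """Cache set that a given array offset maps to."""
    return (offset // LINE) % SETS

# Trojan side: byte 64*i of its 2048-byte array lands in set i,
# so touching it encodes "bit i of the word is set".
trojan_sets = [cache_set(64 * i) for i in range(32)]

# Spy side: the four probe offsets for bit i all fall into set i,
# one in each of the four ways of its 8192-byte (whole-cache) array.
spy_sets = [[cache_set(64 * i + w * WAY_SPAN) for w in range(4)]
            for i in range(32)]
```

Because the Spy's four probes for bit i fill all four ways of set i, any Trojan access to byte 64i must evict one of them, and the Spy sees the extra L2 reload latency when re-reading that group.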

c# – Best architecture examples for a website with many database transactions

I'm trying to understand what the best way to do this would be. I recently joined a team that is working on a web application using .NET MVC. It has more than 50 tables and will process many records every day (2–3 million).
They are not using EF. Most of the logic, including business logic, lives in SQL. I don't like this approach, since it is not unit-testable. The main reason they continue using SQL is SQL transactions (they don't believe .NET transactions are as good as SQL transactions).

An example flow: when a transaction occurs on the website, you must create a record in the main table, then create an audit record, and then insert some GL records. This is what the stored procedure looks like (a wrapper that calls several other stored procedures):


How can I split each part into separate web services but still have transactional behavior across them? I understand that I can chain multiple API calls and detect whether one has failed. But if the first call succeeds and the second fails, it will leave stale data in the database.
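To make the team's point concrete: within a single database, the whole flow can be wrapped in one transaction, so a failure in any step rolls back all of them, which is exactly what is lost once the steps become separate service calls. A minimal sketch of that guarantee (Python with sqlite3 standing in for the real database; the table and column names are made up):

```python
import sqlite3

def post_transaction(conn, amount, fail_audit=False):
    """Insert main, audit and GL rows atomically; roll back on any failure."""
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            conn.execute("INSERT INTO main_txn(amount) VALUES (?)", (amount,))
            if fail_audit:  # simulate the second step failing
                raise RuntimeError("audit step failed")
            conn.execute("INSERT INTO audit(note) VALUES (?)", ("posted",))
            conn.execute("INSERT INTO gl(amount) VALUES (?)", (amount,))
        return True
    except RuntimeError:
        return False

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE main_txn(amount REAL);
CREATE TABLE audit(note TEXT);
CREATE TABLE gl(amount REAL);
""")

ok = post_transaction(conn, 100.0)                  # all three rows commit
failed = post_transaction(conn, 50.0, fail_audit=True)  # nothing is left behind
```

When the same three steps are three HTTP calls to three services, no single database transaction can span them, which is why the alternatives are two-phase commit or compensating actions (sagas).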

I have looked at two-phase commit, but I am not sure whether any of the large systems actually do this. Is there a framework to help me do it? I tried to find out how it is done in TFS but couldn't find any links.

Note that I am not looking for an EF vs. SP debate, or for how to do transactions in REST services. Instead, I am looking for concrete architectures/tools used by applications such as TFS.

Architectural patterns: How to perform transactions between services in a microservice architecture?

One question I am struggling with is how to perform operations/transactions that span services in a distributed system.

For example, I have two services: Orders and Payments. How should the Orders service call the Payments service without coupling the two services to each other?

As I understand it, the Orders service would emit an event such as requestPayment for a user and an amount, and the Payments service would react to it, process the payment, and then emit something like paymentUpdated with the payment's new status. Is this correct? Isn't this coupling in disguise, since the events are so closely tied to each other?
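The choreography described above can be sketched in-process like this (Python; the in-memory bus is a stand-in for a real broker such as RabbitMQ or Kafka, and the event and service names follow the question):

```python
class EventBus:
    """Toy synchronous event bus; a real system would use a message broker."""
    def __init__(self):
        self.handlers = {}
    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)
    def publish(self, event, payload):
        for handler in self.handlers.get(event, []):
            handler(payload)

class PaymentService:
    def __init__(self, bus):
        self.bus = bus
        bus.subscribe("requestPayment", self.on_request)
    def on_request(self, p):
        # Stand-in payment logic: small amounts succeed.
        status = "paid" if p["amount"] <= 100 else "declined"
        self.bus.publish("paymentUpdated",
                         {"order_id": p["order_id"], "status": status})

class OrderService:
    def __init__(self, bus):
        self.bus = bus
        self.orders = {}
        bus.subscribe("paymentUpdated", self.on_payment)
    def place_order(self, order_id, amount):
        self.orders[order_id] = "pending"   # what the API caller sees at first
        self.bus.publish("requestPayment",
                         {"order_id": order_id, "amount": amount})
    def on_payment(self, p):
        self.orders[p["order_id"]] = p["status"]

bus = EventBus()
payments = PaymentService(bus)
orders = OrderService(bus)
orders.place_order("o1", 50)
orders.place_order("o2", 500)
```

Note that neither service imports the other; each knows only the event names. Because this toy bus is synchronous the status resolves immediately, but with a real broker the API consumer would see the order as "pending" until paymentUpdated arrives, which is part of the answer to the blocking question below.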

Assuming this is the way to go, what would it look like from the API consumer's perspective? Would all of this happen while the client call blocks?

I am not interested in Event Sourcing at this time.

Architecture choice for sparse data, tracked over time, including millions of text records

I want to organize data about items (products) from several companies.

I have 1 to 10 million of these items and collect certain metadata about them:

  • Location (categorical)
  • first appeared (date)
  • division (categorical)
  • description (100–300 words), either text or HTML
  • up to 20–30 additional metadata fields (optional)

On average, an item has 5–10 of these metadata fields populated.

Information about these items is updated daily. I need to capture the period of time during which each item was available.

My attempts:

Since the data is recorded over several years, it is very sparse. So I thought of a sparse matrix to track the time horizon over which a product was available. (The data quality is quite bad, so it can happen that a product is falsely shown as unavailable and then reappears; that is why I am considering tracking, for every day, whether the product was available.)
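One alternative to a day-by-day boolean matrix is to collapse the daily observations into availability intervals, which stores only the sparse information (and makes the false-gap problem visible as short breaks between intervals). A minimal sketch of that compression (Python, illustrative only):

```python
from datetime import date

def availability_intervals(days_seen):
    """Collapse a sparse set of 'item was available on this day'
    observations into contiguous (start, end) date intervals."""
    intervals = []
    for d in sorted(days_seen):
        if intervals and (d - intervals[-1][1]).days == 1:
            # extends the current run of consecutive days
            intervals[-1] = (intervals[-1][0], d)
        else:
            # gap in the observations: start a new interval
            intervals.append((d, d))
    return intervals

seen = {date(2020, 1, 1), date(2020, 1, 2), date(2020, 1, 3),
        date(2020, 1, 10)}
ivals = availability_intervals(seen)
```

With this representation, a suspiciously short gap between two intervals can later be merged if you decide it was a data-quality artifact rather than genuine unavailability.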

For the item descriptions, I considered a NoSQL approach but decided against it, since the data is, in general, quite structured.

Currently, the data is still stored in flat files. If I switch to a database, is there a volume limit I should worry about with, say, 1, 10 or 50 million items?


Computer architecture: out-of-order execution – loads/stores

In out-of-order execution, what happens if a younger store executes before an older load? Won't the load get the wrong data if the younger store writes to the same address?
I understand the memory disambiguation problem, i.e. a younger load depending on an older store; my question is about the opposite direction.

A (older):   Load  R1 <- Mem[R0 + 8]
B (younger): Store R2 -> Mem[R3 + 25]

The issue is that if B executes before A while R3 + 25 = R0 + 8 (that is, the store writes to the address first, and then the load reads the new data, which is incorrect if they are allowed to execute in that order!). How is this case handled?
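The usual answer is that a store's data never reaches memory (or the cache) until the store retires, and retirement happens strictly in program order. While the store sits in the store buffer, its value is forwarded only to *younger* loads at the same address, never to older ones, so the older load A cannot observe B's value no matter how early B executes. A toy sketch of that rule (Python, purely illustrative; not a model of any real pipeline):

```python
class Core:
    """Toy model: stores enter a store buffer at execute time and are
    written to memory only when they retire, in program order."""
    def __init__(self):
        self.mem = {}
        self.store_buffer = []   # (seq, addr, value); seq = program order

    def execute_store(self, seq, addr, value):
        self.store_buffer.append((seq, addr, value))

    def execute_load(self, seq, addr):
        # A load may forward only from OLDER stores (smaller seq).
        older = [s for s in self.store_buffer
                 if s[0] < seq and s[1] == addr]
        if older:
            return max(older)[2]      # youngest of the older stores wins
        return self.mem.get(addr, 0)  # younger stores are invisible

    def retire_all(self):
        for _, addr, value in sorted(self.store_buffer):
            self.mem[addr] = value    # becomes visible only at retirement
        self.store_buffer.clear()

core = Core()
core.mem[25] = 11                  # old value at the shared address
core.execute_store(2, 25, 99)      # younger store B executes FIRST
loaded = core.execute_load(1, 25)  # older load A executes later, same address
core.retire_all()
```

Even though B executed first, the older load A still reads the old value (11); B's value (99) becomes architecturally visible only at retirement.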

database: architecture of "tags" for contacts in a marketing system, and hard-coded IDs so that functions can refer to them

I have these tables: contacts, tags, contact_tags.

(contact_tags has the columns contact_id and tag_id.)

Administrators can manually create new tags through an internal website.

Administrators can also associate a tag with a contact (that is, create a new contact_tag).

The behavior of non-administrator visitors can also cause the creation of a contact_tag.

Certain tags are really important, and when one of these important tags is associated with a contact (that is, its tag_id appears in a contact_tag created manually by an administrator or through visitor behavior), certain functions must be executed.

To achieve this, several tag IDs are hard-coded in my code. This smells bad, but I haven't found a better approach, so I wonder what the best practice is.

For example, my code might say:

const TAG_DEPOSIT_PAID = 67;
//(and many other constants of IDs specified here too)

and then somewhere in ContactTagAddedEventListener:

if ($tag->id == Tag::TAG_DEPOSIT_PAID) {
    // do stuff
}

How can I improve my architecture?

(By the way, for testing purposes, my database seeder generates a test database containing all the "important" tags that these various functions refer to.)

P.S. I understand that enums are almost always bad and that tables like my tags table are generally better, but I am clearly missing some other principle, because my approach doesn't feel clean.
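One common improvement is to stop referring to numeric IDs at all: give each "important" tag a stable, human-readable key (a slug or system_name column, seeded together with the tag) and register handlers against that key. The numeric ID then stays a database implementation detail. A minimal sketch (Python rather than PHP, and all names here are made up):

```python
# Tag rows as the database might return them; 'slug' is a stable key
# assigned when an "important" tag is seeded, None for ordinary tags.
tags = [
    {"id": 67, "name": "Deposit paid", "slug": "deposit_paid"},
    {"id": 68, "name": "Newsletter",   "slug": None},
]

# Handlers are registered per slug, never per numeric id.
handlers = {}

def on_tag(slug):
    """Decorator registering a handler for an important tag's slug."""
    def register(fn):
        handlers[slug] = fn
        return fn
    return register

@on_tag("deposit_paid")
def handle_deposit_paid(contact_id):
    return f"send receipt to contact {contact_id}"

def tag_added(tag, contact_id):
    """What the ContactTagAdded event listener would call."""
    fn = handlers.get(tag["slug"])
    return fn(contact_id) if fn else None  # ordinary tags do nothing

result = tag_added(tags[0], 42)
ignored = tag_added(tags[1], 42)
```

This keeps the tags table as the source of truth while the code depends only on the stable slug, so admins can rename tags or re-seed IDs without breaking the listeners.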

architecture: best practices for running long-running periodic tasks?

We are building an application whose main features are a chat bot and a dashboard that highlights and summarizes information based on the user's chatbot conversations. The chat bot asks questions on various topics and, based on user input, we want to analyze the data to surface insights for users on that dashboard.

My question is about the data processing, which will be compute-intensive. We imagine we would need a worker machine that spawns threads to do the processing and update the database with the findings.

What are today's best practices for running periodic jobs at scheduled times, say once a day at 1 a.m.? Are cron jobs suitable for my purpose? I can't yet say how long a job would take each day, but if it takes an hour, is it fine for a cron job to run that long? Are there current best practices for this?

The other idea I had was to have a message queue and a consumer instance that analyzes each user's data one by one. The queue would receive user IDs, so the cron job's only responsibility would be to enqueue all the users once a day. Does this idea sound more feasible?
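The cron-plus-queue split described above is a common pattern: the scheduled job stays tiny (it only enqueues user IDs), and the long-running work moves to consumers that can be scaled out and retried independently. An in-process sketch of the shape (Python's stdlib queue standing in for a real broker such as SQS or RabbitMQ):

```python
import queue
import threading

jobs = queue.Queue()
results = {}
results_lock = threading.Lock()

def nightly_cron(user_ids):
    """The entire 1 a.m. job: just enqueue the work."""
    for uid in user_ids:
        jobs.put(uid)

def worker():
    """Consumer: drains the queue one user at a time."""
    while True:
        uid = jobs.get()
        if uid is None:          # sentinel: shut down this worker
            jobs.task_done()
            return
        with results_lock:
            results[uid] = f"insights for {uid}"  # stand-in for real analysis
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
nightly_cron([1, 2, 3, 4, 5])
for _ in threads:                # one sentinel per worker
    jobs.put(None)
for t in threads:
    t.join()
```

In this shape the cron job finishes in seconds regardless of how long the analysis takes, and the hour-long work is the consumers' problem, which maps naturally onto Heroku worker dynos or an AWS queue plus workers.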


We are planning to stay on Heroku during the development phase; if we like it we might stay on Heroku for production, otherwise move to AWS. Does Heroku have any add-ons for long-running jobs? Does AWS have equivalents?

Any help is much appreciated!

enterprise architecture: reverse-engineering an Azure infrastructure

I joined a project as a data architect; they have an enormous, sprawling Azure deployment and, due to turnover of key people, they cannot tell me with any confidence what is in it, let alone give me documentation.

I am looking to evaluate a couple of tools for a quick win here. I am also a little surprised that there is no Azure-native way of doing this, e.g. exporting to Visio. Am I missing a trick, or is there a better method?

Information architecture: research on progress-tracking / step-indicator UI elements

I am currently working on an application where users complete a task flow that is split across several pages.

For this application, I am considering using a progress tracker / stepper UI element. However, I am not convinced of its real added value. Does it help the user? Does it increase the completion rate? In general, which of its properties are actually valuable to the user?

I am looking for real empirical research. I found numerous articles online, for example:

UX design progress trackers

Quote: "If you know how many steps you must complete in the process,
you are more likely to complete the process."

Problems with this statement: no reference is given,
and empirical evidence suggests that this statement is missing a very
important nuance (reference).

And that:

Progress trackers in web design: examples and best practices

Quote: "Progress trackers are designed to help users through a
multi-step process, and it is vital that such trackers are well designed
to keep users informed about which section they are currently in,
which sections they have completed, and what tasks remain."

The problems with this statement are the same as in the previous article.
In addition, it names three important aspects of a progress indicator:
keeping the user informed of where they are, what they have completed,
and what tasks remain. No empirical evidence is referenced that (1) shows
a progress indicator really does all these things, and does them better;
or (2) shows that these three aspects are actually vital for a good user
experience.

So far, I have mainly found resources in which I simply have to take these statements at face value. Knowing that some of them lack nuance, or may simply be wrong, it seems imprudent, besides defeating the purpose of well-informed UX design, to simply assume they are true and implement accordingly.

My question now: can anyone point me toward real empirical research, or properly structured analysis, of progress-tracker / stepper UI elements and their real benefits/drawbacks/effects?

So far, I have looked at:


But these pages are quite difficult to search, and terms like progress, progress indicator and the like don't turn up much useful information …

KDB: How to serialize a table row for a union join within the kdb-tick architecture?

I am trying to modify the kdb-tick architecture to support a union join (uj) between the incoming data and the local RDB table.
I have modified the upd function in tick.q to the following:

    upd:{[t;x]
      if[not -16=type first first x;a:"n"$a;x:$[0>type first x;a,x;(enlist(count first x)#a),x]];
      f:key flip value t;pub[t;$[0>type first x;enlist f!x;flip f!x]];if[l;l enlist(`ups;t;x);i+:1];};

With ups set to uj in the subscriber files.
My question is how a row of the table should be serialized before being published in this function.
That is, given a table:

second     |  amount price 
02:46:01   |  54     9953.5
02:46:02   |  54     9953.5
02:46:03   |  54     9953.5
02:46:04   |  150    9953.5    
02:46:05   |  150    9954.5

How should the first row (02:46:01 | 54 9953.5) be serialized so that it can be sent through the function to the subscribers, where uj will be executed between the row and the subscriber's local table?
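For what it's worth, in q `first` on a table yields a dictionary (columns!values), and `enlist` on that dictionary turns it back into a one-row table, which is what uj expects on both sides; this matches the `enlist f!x` branch in the upd code above, where f!x builds the dictionary from column names and a single row of values. A small sketch of that round trip (q; the table contents follow the example above, and this illustrates only the dictionary/table duality, not the tick.q plumbing):

```q
t:([] second:02:46:01 02:46:02; amount:54 54; price:9953.5 9953.5)
r:first t        / a dictionary: `second`amount`price!(02:46:01;54;9953.5)
enlist r         / a one-row table again, valid as either side of uj
t uj enlist r    / appends the row to the local table
```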
Thanks in advance for your advice.