domain driven design – CQRS denormalized data query with multiple aggregates

Let’s say there’s a domain that’s similar to Reddit.

Aggregate roots are

Board // you can ignore this
  - boardId

Post
  - postId
  - userId

User
  - userId
  - username

where each aggregate emits

  - PostCreated
  - UserCreated
  - UsernameUpdated

On the query side, the query will fetch a page of posts in a specific board.

Given this, the denormalized data should look as below

posts: [{
  title: 'some post title',
  body: 'some post body',
  author: 'user123',
}]

Now in the denormalized database, I'd create a post entry when the PostCreated event is received, and the received event holds the userId.

To populate the author field of the read model above from that user ID, I can do either of the following:

  1. Read the username from the existing denormalized User data, and save the post read model with the username.
    • This requires updating ALL of that user's posts when a UsernameUpdated event is handled.
  2. Create a join column between User and Post, and when a query is requested, join the tables to populate the author field with the username.

Is it so obvious that the second method is the way to go? What confuses me is that the denormalized database starts to feel almost like a giant monolithic database (especially when I think of adding other aggregate roots' events to the read model, e.g. Board). Or is denormalized data just like this?
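To make option 1 concrete, here is a minimal in-memory sketch (all class and method names are my own invention, not from any framework): the projection keeps a userId-to-username lookup, resolves the username at PostCreated time, and rewrites every affected post when UsernameUpdated arrives.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// One denormalized post entry in the read model.
class PostReadModel {
    final String postId;
    final String userId;
    String author; // denormalized username

    PostReadModel(String postId, String userId, String author) {
        this.postId = postId;
        this.userId = userId;
        this.author = author;
    }
}

class PostProjection {
    private final Map<String, PostReadModel> postsById = new HashMap<>();
    private final Map<String, String> usernamesByUserId = new HashMap<>();

    // UserCreated / UsernameUpdated keep a local username lookup.
    void onUserCreated(String userId, String username) {
        usernamesByUserId.put(userId, username);
    }

    // PostCreated resolves the username at write time (option 1).
    void onPostCreated(String postId, String userId) {
        postsById.put(postId,
            new PostReadModel(postId, userId, usernamesByUserId.get(userId)));
    }

    // The cost of option 1: every post by this user must be rewritten.
    void onUsernameUpdated(String userId, String newUsername) {
        usernamesByUserId.put(userId, newUsername);
        for (PostReadModel p : postsById.values()) {
            if (p.userId.equals(userId)) {
                p.author = newUsername;
            }
        }
    }

    List<String> authorsOfAllPosts() {
        return postsById.values().stream()
                .map(p -> p.author)
                .collect(Collectors.toList());
    }
}
```

Note that the projection keeping its own username lookup is itself a small piece of denormalization; it avoids querying the User read model from inside the Post handler.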

java – Leap Year Test Driven Development

I was asked to do an assignment on a Leap Year API following test-driven development, but I scored low on the task and I don't know where I went wrong. Can anyone please review my code in the git repository? Since they told me to commit after adding each test case, I have done it in that manner. I am sharing the GitHub link of my commits to get accurate feedback.

Problem Statement:

Here’s the assessment: Please code this in Java using TDD. Make sure you keep committing as you add test cases, so we can see how your code evolves

My Code:


package com.tdd.leapyear;

public class LeapYear {
    public boolean isLeapYear(int year) {
        if (isDivisibleBy400(year)) {
            return true;
        } else if (isDivisibleBy100(year) && !isDivisibleBy400(year)) {
            return false;
        } else if (isDivisibleBy4(year) && !isDivisibleBy100(year)) {
            return true;
        } else {
            return isDivisibleBy4(year);
        }
    }

    public boolean isDivisibleBy100(int year) {
        return (year % 100) == 0;
    }

    public boolean isDivisibleBy400(int year) {
        return (year % 400) == 0;
    }

    public boolean isDivisibleBy4(int year) {
        return (year % 4) == 0;
    }
}
package com.tdd.leapyear;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.util.Arrays;
import java.util.List;

import static org.junit.Assert.*;

public class LeapYearTest {

    private LeapYear leapYear;

    @Before
    public void setUp() {
        this.leapYear = new LeapYear();
    }

    @Test
    public void when2000_thenIsLeapYear() {
        assertTrue("Method should treat 2000 as a leap year", this.leapYear.isLeapYear(2000));
    }

    @Test
    public void when2008_thenIsLeapYear() {
        assertTrue("Method should treat 2008 as a leap year", this.leapYear.isLeapYear(2008));
    }

    @Test
    public void when200_thenIsDivisibleBy100() {
        assertTrue("Method should return true for 200 as it's divisible by 100",
                this.leapYear.isDivisibleBy100(200));
    }

    @Test
    public void when350_thenIsDivisibleBy100() {
        assertFalse("Method should return false for 350 as it's not divisible by 100",
                this.leapYear.isDivisibleBy100(350));
    }

    @Test
    public void when200_thenIsDivisibleBy400() {
        assertFalse("Method should return false for 200 as it's not divisible by 400",
                this.leapYear.isDivisibleBy400(200));
    }

    @Test
    public void when800_thenIsDivisibleBy400() {
        assertTrue("Method should return true for 800 as it's divisible by 400",
                this.leapYear.isDivisibleBy400(800));
    }

    @Test
    public void whenNonLeapYear_thenIsLeapYear() {
        List<Integer> nonLeapYears = Arrays.asList(1700, 1800, 1900, 2100, 2017, 2018, 2019);
        for (Integer year : nonLeapYears) {
            assertFalse(year + " should be a non leap year, but isLeapYear method" +
                    " returned true", this.leapYear.isLeapYear(year));
        }
    }

    @Test
    public void whenLeapYears_thenIsLeapYear() {
        List<Integer> validLeapYears = Arrays.asList(2012, 2016);
        for (Integer year : validLeapYears) {
            assertTrue(year + " should be a leap year, but isLeapYear method" +
                    " returned false", this.leapYear.isLeapYear(year));
        }
    }

    @Test
    public void when200_thenIsDivisibleBy4() {
        assertTrue("Method should return true for 200 as it's divisible by 4",
                this.leapYear.isDivisibleBy4(200));
    }

    @Test
    public void when35_thenIsDivisibleBy4() {
        assertFalse("Method should return false for 35 as it's not divisible by 4",
                this.leapYear.isDivisibleBy4(35));
    }

    @After
    public void tearDown() {
        this.leapYear = null;
    }
}

Architecture – Data Driven Angular Shapes

Our company is looking for strategies where Angular forms and HTML/CSS design are entirely data-driven.

The following is being created as a framework.

In the following code, we have three formControlNames: CustomerName, Address, Purchase Amount.
They are in the upper left corner, with a CSS width of 450px and a height of 300px.

The question is: is this a proper architecture strategy for web design? We are hearing that data-driven forms are a common strategy.

* Personally, I'm not sure what value this buys; it essentially converts web design to JSON, plus we lose Visual Studio Code's autocomplete capability and type-checking syntax.

{
  "groupName": "topSection",
  "groupStructure": "gTransparentContainer",
  "subgroups": [
    {
      "groupName": "customerDetails",
      "title": "Customer Form",
      "detail": "Please review the following information before purchase.",
      "groupStructure": "gTopLeftSection",
      "width": "450px",
      "height": "300px",
      "controls": [
        { "controlType": "pReadOnly", "controlName": "customerName", "label": "Customer Name",
          "width": "250px" },
        { "controlType": "pReadOnly", "controlName": "address", "label": "Address Information",
          "width": "200px", "value": "AF - Address File Layout" },
        { "controlType": "pReadOnly", "required": true, "controlName": "purchaseAmount", "label": "Purchase Total",
          "width": "250px" }
      ]
    }
  ]
}

design: event driven job scheduler architecture

I have a cron based job scheduler that triggers time based jobs. This was simple to think about and design because the event in this case is well defined (the event is time).

Now I want to create a generic event-based job scheduler that can trigger a job when any event occurs. Examples of events: a file touch, directory creation, Jira request creation, another job's failure, etc. Basically, the event could be anything.

I'm not sure how to go about designing this system, especially from the user's perspective. How will my users define the event, how will my system react to these events to start a job, etc.?
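One common shape for this kind of system, sketched minimally in Java (all names and event types here are invented for illustration): normalize every source into a generic event, and let users define a trigger as an event type plus a filter predicate plus a job. Event sources (file watchers, webhooks, the existing cron scheduler) all publish into one entry point.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// A generic event: a type tag plus free-form attributes.
class Event {
    final String type;               // e.g. "FILE_CREATED", "JOB_FAILED"
    final Map<String, String> attrs; // source-specific details

    Event(String type, Map<String, String> attrs) {
        this.type = type;
        this.attrs = attrs;
    }
}

// A user-defined trigger: which events, which condition, which job.
class Trigger {
    final String eventType;
    final Predicate<Event> filter;
    final Runnable job;

    Trigger(String eventType, Predicate<Event> filter, Runnable job) {
        this.eventType = eventType;
        this.filter = filter;
        this.job = job;
    }
}

class EventScheduler {
    private final List<Trigger> triggers = new ArrayList<>();

    void register(Trigger t) {
        triggers.add(t);
    }

    // All event sources publish here; matching triggers fire their jobs.
    void publish(Event e) {
        for (Trigger t : triggers) {
            if (t.eventType.equals(e.type) && t.filter.test(e)) {
                t.job.run(); // in a real system: enqueue to a worker pool, don't run inline
            }
        }
    }
}
```

In this framing, the time-based scheduler becomes just one more event source that publishes a "TIMER_FIRED" event, and users describe triggers declaratively (event type plus condition) rather than writing polling code.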

compilers – Syntax-Directed Translation Alternatives

I am implementing a toy C compiler. For the type-checking and code-generation phases, I have used the syntax-directed translation method (which is covered in detail in the Dragon book). Looking at some small existing C compiler implementations, I found that they also use this method.

I am curious to know if there are alternative methods besides syntax-directed translation.
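The most common alternative is to have the parser build an explicit AST and then run separate tree-walking passes (type checking, then code generation) over it, instead of attaching semantic actions directly to grammar productions. A minimal sketch, with node and visitor names invented for illustration:

```java
// An explicit AST with the visitor pattern: each semantic phase is its own
// pass over the tree, decoupled from parsing.
interface Expr {
    <R> R accept(ExprVisitor<R> v);
}

class IntLit implements Expr {
    final int value;
    IntLit(int value) { this.value = value; }
    public <R> R accept(ExprVisitor<R> v) { return v.visitIntLit(this); }
}

class Add implements Expr {
    final Expr left, right;
    Add(Expr left, Expr right) { this.left = left; this.right = right; }
    public <R> R accept(ExprVisitor<R> v) { return v.visitAdd(this); }
}

interface ExprVisitor<R> {
    R visitIntLit(IntLit e);
    R visitAdd(Add e);
}

// One pass: type checking. A separate CodeGen visitor could walk the same tree.
class TypeChecker implements ExprVisitor<String> {
    public String visitIntLit(IntLit e) { return "int"; }
    public String visitAdd(Add e) {
        String l = e.left.accept(this);
        String r = e.right.accept(this);
        if (!l.equals("int") || !r.equals("int")) {
            throw new RuntimeException("type error in +");
        }
        return "int";
    }
}
```

The trade-off versus single-pass syntax-directed translation is memory for the tree in exchange for simpler phases, easier multi-pass analyses, and the ability to reorder or repeat passes.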

Are the increments of a stochastic process driven by fractional Brownian motion independent?

I am studying the following equation in the context of population dynamics:
$$\tag{1} dX_t = \mu X_t \, dt + \sigma X_t \, dB^H_t$$

where $B^H$ is the fractional Brownian motion (fBm) with Hurst parameter $H \in (0,1)$, that is, a continuous Gaussian process starting at zero, with $B^H_t \sim \mathcal N(0, t^{2H})$ and with covariance $\mathbb E(B^H_t B^H_s) = \frac12 (|t|^{2H} + |s|^{2H} - |t-s|^{2H})$.

Depending on the value of $H$:

  • If $H = 1/2$ then $B^H$ is the classical Brownian motion
  • If $H < 1/2$ then the increments of $B^H$ are negatively correlated
  • If $H > 1/2$ then the increments of $B^H$ are positively correlated

In addition, the increment process $B^H_{t+1} - B^H_t$ is called fractional Gaussian noise (fGn) and has covariance $\gamma(k) = \frac12 (|k-1|^{2H} - 2|k|^{2H} + |k+1|^{2H})$.

To run numerical simulations, we first have to find estimators for the parameters $\mu$ and $\sigma$.

In this work, the researchers derive the maximum likelihood function in the following way.

Let $f, g$ be two functions of $X_t$ and of $\theta$, the vector of unknown parameters. Consider
$$\tag{2} dX_t = f(X_t, \theta) \, dt + g(X_t, \theta) \, dB^H_t$$

The first and second moments of the increments of $X$ are given by
$$\mathbb E(dX \mid X, t) = f(X_t, \theta) \, dt$$

$$\mathbb E((dX)^2 \mid X, t) = g^2(X_t, \theta) \, (dt)^{2H}.$$

Partition $(0, T)$ as $0 = t_0 < t_1 < \dots < t_N = T$ s.t. $\Delta t = t_{i+1} - t_i = T/N$; then the SDE $(2)$ can be approximated by the Euler–Maruyama method as
$$\tag{3} X_0 = x_0, \quad X_{n+1} = X_n + f(X_n, \theta) \, \Delta t + g(X_n, \theta) \, \Delta B^H_n$$

where $\Delta B^H_n = B^H_{t_{n+1}} - B^H_{t_n}$ (in the mentioned paper, $\Delta B^H_n$ is not explicitly defined, but I guess the definition is what I wrote here) is the fGn and $0 \le n \le N-1$.

The probability density function of $(X_{n+1}, t_{n+1})$ given $(X_n, t_n)$ is then
$$\tag{4} \color{red}{p_X} = \frac{1}{\sqrt{2\pi g^2(X_n, \theta) (\Delta t)^{2H}}} \exp\Bigg( -\frac{(X_{n+1} - X_n - f(X_n, \theta) \Delta t)^2}{2 g^2(X_n, \theta) (\Delta t)^{2H}} \Bigg)$$

and the joint density gives the likelihood function $\mathcal L$, whose maximizers are the estimates of the parameters $\mu$ and $\sigma$.

For the initial equation $(1)$ we have $f(X_t, \theta) = \mu X_t$ and $g(X_t, \theta) = \sigma X_t$, thus
$$\tag{5} \color{red}{\mathcal L(\mu, \sigma) = \prod_{n=0}^{N-1}} \frac{1}{\sqrt{2\pi \sigma^2 X_n^2 (\Delta t)^{2H}}} \exp\Bigg( -\frac{(X_{n+1} - X_n - \mu X_n \Delta t)^2}{2 \sigma^2 X_n^2 (\Delta t)^{2H}} \Bigg)$$

The first question is related to the first $\color{red}{\text{red}}$ term: is formula $(4)$ for the pdf of the increments of the process $X$, defined by $(3)$, right?

The second question is related to the second $\color{red}{\text{red}}$ term: is formula $(5)$ for the joint density (likelihood function) of the increments of the process $X$, defined by $(3)$, right?

On the second question, my doubt is the following: since the increments of the fBm are not independent, perhaps the increments of the process $X$ defined by $(3)$ and driven by the fBm are not independent either. If that were the case, then we could not write the joint density of the increments of $X$ as a product of individual densities. How can I test whether the increments of $X$ are independent or not?
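To state the doubt precisely (this framing is mine, not from the cited paper): the chain rule always factorizes the joint density into conditionals, and a product of one-step transition densities as in $(5)$ is exact only when the process is Markov.

```latex
% Chain-rule factorization (always valid) for the sample (X_1, \dots, X_N):
p(x_1, \dots, x_N)
  = p(x_1) \, p(x_2 \mid x_1) \, p(x_3 \mid x_1, x_2)
    \cdots p(x_N \mid x_1, \dots, x_{N-1}).
% The product in (5) keeps only one-step factors p(x_{n+1} \mid x_n), which is
% exact only if X is Markov. For H \neq 1/2 the fGn increments \Delta B^H_n are
% correlated, so each conditional density may depend on the whole past, and the
% product form is then an approximation rather than the exact likelihood.
```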

Design – Handle failures in event driven architecture

Suppose you have a bounded context with ~30 business events and, for simplicity, the same number of commands, as in ChangeUserEmailCommand -> UserEmailChangedEvent. Processing a command may fail for the following main reasons (in addition to infrastructure failures, of course):

  1. Validation problem (e.g., email not unique)
  2. Technical problem (optimistic concurrency version mismatch)

I'm interested in the best practice for signaling failures. Would you create 30 more events like ChangeUserEmailFailedEvent? If not, what is your general rule for which events get a paired *FailedEvent? Would you create just one ConcurrencyFailedEvent for all concurrency problems and add the source command type as part of its payload?
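One option between the two extremes is a single generic failure event whose payload carries the source command type and a reason code; consumers that care about a specific command filter on those fields. A minimal sketch with invented names:

```java
import java.time.Instant;

// A single generic failure event instead of 30 paired *FailedEvent types.
// All field and reason-code names here are illustrative, not from any framework.
class CommandFailedEvent {
    final String commandType; // e.g. "ChangeUserEmailCommand"
    final String reasonCode;  // e.g. "VALIDATION_EMAIL_NOT_UNIQUE", "CONCURRENCY_CONFLICT"
    final String aggregateId; // which aggregate the command targeted
    final Instant occurredAt;

    CommandFailedEvent(String commandType, String reasonCode, String aggregateId) {
        this.commandType = commandType;
        this.reasonCode = reasonCode;
        this.aggregateId = aggregateId;
        this.occurredAt = Instant.now();
    }

    boolean isConcurrencyConflict() {
        return "CONCURRENCY_CONFLICT".equals(reasonCode);
    }
}
```

The trade-off: a generic event keeps the event catalogue small, but the payload is weakly typed; dedicated *FailedEvent types are worth it only where a failure is itself a business-meaningful fact that downstream consumers react to.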

dnd 5e – Spell Level and Brain Driven Preparation

For a baseline, compare the Pearl of Power. This uncommon magic item, which requires attunement, can restore one low-level spell slot for a character. It is clearly much less powerful than the spell you describe.

The DMG (p. 128) suggests that a magic item should only be created by a spellcaster who can cast any spells it produces, and also provides guidance that 3rd level is a reasonable point at which a character could create an uncommon magic item. From this we can deduce that the effect of a Pearl of Power, much weaker than Brew Beer, should be a 2nd-level spell (available to a 3rd-level caster). (1)

But you are creating something much more powerful: it restores all spell slots, including the one used to cast it, and does so for a whole party, along with all class features (not to mention the restored HP).

There is not much guidance on the level of this spell since, for the most part, 5th edition does not allow this kind of repeatable resource renewal. I would suggest that if you allow this spell, it should have a material cost; otherwise it will essentially always be used, as there is precious little reason not to. I would also suggest that the brew created must expire at some point; otherwise it may effectively cost no resources at all, since it can simply be created and stockpiled during downtime.

Based on the very little precedent available, I would also recommend making this a 9th-level spell. At any other spell level, there is very little cost to simply reserving a spell slot for Brew Beer. At least at 9th level, the caster is guaranteed to have given up something in a previous encounter in order to use it (although even then, they still have a 9th-level slot after using this spell).

This spell causes all kinds of problems. Mage Armor (and other 8-hour spells) can now be maintained without spending a spell slot: just drink the brew after casting them. Warlocks, monks and other classes balanced around short rests are drastically weakened, while wizards can now pull out all the stops twice a day. Wizards can now change their prepared spells on a short rest, making them much more versatile. Only once before taking a "real" rest, yes, but even two long rests per day is more than the game is balanced for.

In short, this spell will have extreme consequences for the game's balance, even at 9th level. I'm not sure it's worth doing just for the novelty. I recommend drastically reducing its scope before trying to assign it a level.

If the intention is for this spell to replace your "real" rest, then you most likely want to rephrase it. As written, the spell does not turn your short rest into a long rest, and you can benefit from more than one long rest per day. If instead you clarify that this counts as a long rest for the purposes of the once-per-24-hours rule, that solves many of the spell's problems.

(1) Obviously, the exact effect of the Pearl of Power would be problematic on its own as a 2nd-level spell; this is just to give an example of how much more powerful the proposed spell is.

[ Politics ] Open question: Are you glad that Ukraine has finally announced the investigation that was driven by Trump's efforts?

Do you know whether they will investigate if Trump's associates stalked the U.S. ambassador to Ukraine, Marie Yovanovitch?