junit4 – How to mark a Cucumber JUnit scenario as failed but continue with the other scenarios

I am running Cucumber with JUnit. There are three scenarios. Each scenario has an assertion to check the displayed message:


                 Assert.assertEquals(expectedMessage, actualMessage);

When I run the runner file and the assertion fails in Scenario 1, it marks Scenario 1 as failed and stops without executing Scenario 2 and Scenario 3. If an assertion fails in Scenario 1, how can I mark Scenario 1 as failed and continue with the other scenarios, Scenario 2 and Scenario 3? In other words, if an assertion fails within a scenario, that scenario should be marked as failed and the rest should continue, until all scenarios have been executed.
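A soft-assertion pattern lets a scenario record several assertion failures and only fail at the end, instead of aborting on the first mismatch. A minimal JDK-only sketch (libraries such as AssertJ's SoftAssertions provide this ready-made; all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal soft-assertion sketch: failures are collected instead of thrown
// immediately, and reported together at the end of the scenario.
public class SoftAsserter {
    private final List<String> failures = new ArrayList<>();

    public void assertEquals(Object expected, Object actual) {
        if (expected == null ? actual != null : !expected.equals(actual)) {
            failures.add("expected <" + expected + "> but was <" + actual + ">");
        }
    }

    // Call from an @After hook, or at the end of the last step.
    public void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " assertion(s) failed: " + failures);
        }
    }

    public static void main(String[] args) {
        SoftAsserter softly = new SoftAsserter();
        softly.assertEquals("expectedMessage", "actualMessage"); // recorded, not thrown
        softly.assertEquals("same", "same");                     // passes
        try {
            softly.assertAll();
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```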

user centered design – What would be a good scenario for a meeting with a potential client regarding a new web app?

I have an upcoming meeting with a potential client about a web app for their business. Actually, we have already agreed that I will build it for them. Now we will meet again to understand everything from their point of view: what features they need, in other words, to understand their needs as the main daily users. The system will be used only by them, within their company. This is just the beginning of the process. I really want to get this project right from the UX side (and of course, later, the development side as well), and my question is:

What would be the best process for getting all the necessary details out of this meeting, so that afterwards I will know what they need and what to design?

many to many – Strategies to get more performance in a specific scenario


Hi guys, my name's Joao. I'm currently working as a backend developer, and this is my first post.
I'll appreciate any help!

My team is working on a new feature. During development we ran into a question about database performance, and so far opinions are divided... so here I am.

Table structure

For the example, a many-to-many relationship:

[image: diagram of the tables]

The scenario

Considering the tables above, our API must serve a POST endpoint that receives a Domain ID as a path variable and a JSON body containing a list of user IDs, something like this:

Path : /domain/1

Body Json : 

With that list we must persist, in the many-to-many table domain_users, the relationship between users and domains. In short, a single endpoint must manage the rows of the many-to-many table.

The question

We have implemented this in two different ways, and I'm here to ask your opinion on which is more efficient from the database side, thinking about I/O.

Strategy A (Clean and Insert)
First we delete ALL the rows in domain_users where the field domain_id equals the Domain ID of the request, and then we insert into domain_users all the users from the request list for that domain. Considering the JSON example above, with this strategy we would perform one delete and one insert of two rows into the table domain_users.

Strategy B (Logical relationship)
First we compare the list of users that came with the request against the list of users already linked to that Domain ID, obtained by a select on domain_users where the field domain_id equals the Domain ID of the request. Then we apply these conditions:
1 – Is there a user in the JSON request list that is not present in the database list? Insert it.
2 – Is there a user in the database list that is not present in the JSON request? Delete it.
3 – Is there a user present in both lists (database and JSON request)? Do nothing.
With this strategy, the number of operations performed on the database depends on the rows already present in the domain_users many-to-many table.
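The comparison in Strategy B amounts to two set differences. A minimal sketch of just that diff step (names are illustrative; the actual select/insert/delete statements are omitted):

```java
import java.util.*;

public class DomainUserDiff {
    // Computes which user ids to insert and which to delete (Strategy B),
    // given the ids already linked in domain_users and the ids from the request.
    public static Map<String, Set<Long>> diff(Set<Long> inDatabase, Set<Long> inRequest) {
        Set<Long> toInsert = new HashSet<>(inRequest);
        toInsert.removeAll(inDatabase);        // in request but not in DB -> insert
        Set<Long> toDelete = new HashSet<>(inDatabase);
        toDelete.removeAll(inRequest);         // in DB but not in request -> delete
        Map<String, Set<Long>> result = new HashMap<>();
        result.put("insert", toInsert);
        result.put("delete", toDelete);
        return result;
    }

    public static void main(String[] args) {
        Set<Long> db = new HashSet<>(Arrays.asList(1L, 2L, 3L));
        Set<Long> request = new HashSet<>(Arrays.asList(2L, 3L, 4L));
        Map<String, Set<Long>> ops = diff(db, request);
        System.out.println("insert=" + ops.get("insert")); // insert=[4]
        System.out.println("delete=" + ops.get("delete")); // delete=[1]
    }
}
```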

Which of the two strategies should we follow for better database performance?

Note: our backend is in Java, and the database we are working with is PostgreSQL version 12.

design – Is gRPC a good choice for my scenario?

I'm starting to develop a visually simple but infrastructurally robust real-time multiplayer game, to show off my backend skills and hopefully get a job at Blizzard or somewhere similar.

The game is simple: a multiplayer snake game.

I’m aiming to have a game session supporting up to 8 players, all connecting to a single dedicated server.

Because the game is highly latency-sensitive, I was wondering whether duplex (bidirectional streaming) gRPC is a good choice for the communication channel, or whether there is a better or more recommended way to handle this?

Is that so, or is there a better approach?

Why would more confirmations help in a 51% attack scenario?

Exchanges only process a deposit after a certain number of blocks have confirmed (been mined on top of) the block containing the transaction depositing coins to an exchange.

The more confirmations you wait for, the larger the number of blocks an attacker needs to rewrite. As the number of blocks requiring rewriting increases, so does the attacker’s required hashpower.

To rewrite a history of only 1 block, you need to mine 1 block. To rewrite 10 blocks, you need to mine 10 blocks in a row, faster than the rest of the miners can extend the existing 10-block chain further. The difficulty of pulling this off increases exponentially with the length that must be rewritten.

By increasing their confirmation requirements, exchanges can make it infeasible/too expensive for an attacker to complete a double spend attack.
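For intuition, the attacker's catch-up probability can be computed with the formula from section 11 of the Bitcoin whitepaper. A sketch, assuming the attacker controls a fraction q of the total hashpower and is z confirmations behind:

```java
public class AttackerSuccess {
    // Probability that an attacker with fraction q of total hashpower ever
    // catches up from z blocks behind, following the calculation in section 11
    // of the Bitcoin whitepaper (Poisson-weighted gambler's ruin).
    static double attackerSuccessProbability(double q, int z) {
        double p = 1.0 - q;               // honest hashpower fraction
        double lambda = z * (q / p);      // expected attacker progress
        double sum = 1.0;
        for (int k = 0; k <= z; k++) {
            double poisson = Math.exp(-lambda);
            for (int i = 1; i <= k; i++) {
                poisson *= lambda / i;    // lambda^k * e^-lambda / k!
            }
            sum -= poisson * (1 - Math.pow(q / p, z - k));
        }
        return sum;
    }

    public static void main(String[] args) {
        // Probability falls off rapidly as the confirmation count z grows.
        for (int z : new int[]{1, 3, 6, 10}) {
            System.out.printf("q=0.10, z=%2d -> P=%.7f%n", z, attackerSuccessProbability(0.10, z));
        }
    }
}
```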

algorithms – Issue triangulating a font glyph for a specific scenario

I am trying to triangulate TTF glyphs for drawing in OpenGL. The image below shows an example glyph (taken from the FontForge application). The X's represent control points of a Bézier spline; the circular and square points represent points on the glyph, but they can also be the start/end points of the Bézier splines.

I have a triangulation process that can take all these points and render them in OpenGL. I'd like to render the Bézier curves in a shader, and I have something that does this and it works. The way I split things up is to take the control points and the start and end points of each Bézier spline and pass those to the Bézier shader. For the curves that curve outside the shape, I need to remove the control points from the triangulation routine (because I don't want to fill those triangles completely; I leave that to the Bézier shader). For the control points whose curves bend into the shape, I keep them, for the opposite reason.
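The keep-or-remove decision for each control point can be made explicit with a side-of-chord test. A sketch, assuming quadratic TrueType curves and the usual TrueType winding convention (outer contours wound clockwise, so a control point to the left of the chord lies outside the filled region); treat the winding assumption as something to verify against your font data:

```java
public class CurveClassifier {
    // Classifies a quadratic Bézier segment as bulging outward or inward,
    // by testing which side of the chord (start -> end) the control point
    // lies on via a 2D cross product. ASSUMPTION: the contour is an outer
    // contour wound clockwise (the TrueType convention), so "left of the
    // chord" means outside the filled region; flip the sign for the
    // opposite winding.
    static boolean curvesOutward(double sx, double sy,   // on-curve start point
                                 double cx, double cy,   // off-curve control point
                                 double ex, double ey) { // on-curve end point
        // cross((end - start), (control - start)) > 0 => control left of chord
        double cross = (ex - sx) * (cy - sy) - (ey - sy) * (cx - sx);
        return cross > 0;
    }

    public static void main(String[] args) {
        // Control point above the chord (0,0)-(1,0) is left of it:
        System.out.println(curvesOutward(0, 0, 0.5, 0.5, 1, 0));  // true
        System.out.println(curvesOutward(0, 0, 0.5, -0.5, 1, 0)); // false
    }
}
```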

TTF Glyph Original

For the example above, I have added some red lines showing what the new outline would look like with the outer control points removed. That can be seen below.

This shows an example of the issue I am having: I can no longer triangulate this, because the red lines at the bottom cross outside the shape and then back in again.

Is there a better algorithm I can use to avoid situations like this?

TTF Glyph Post Curve Triangle Removal

c# – Would structs be better than classes in this scenario?

I’ve recently been developing a networking application layer (or at least attempting to) for the game I’ve been working on. I think I now have a decent basic idea for the system, but there are multiple ways of implementing it (I refer to the two different ways as class-inheritance and struct-interface), and I’m at a crossroads choosing which way is better.

The basic idea for the system is:

  1. Each packet definition and implementation is contained within its own .cs file.
  2. These packet files will be contained in a SharedProject so they can be referenced by both the client and the server projects from a single source.
  3. Each packet will have the same member variables, but will be handled differently depending on the target build (e.g. PLAYER_CLIENT, GAME_SERVER) using a conditional compiler directive, so they can access target specific namespaces, classes, etc.
  4. The header (aka the “id”) of each packet will be computed and not hardcoded. Right now I’m doing it at runtime (during the initialization), which carries some limitations.

Here are examples of what the system would look like:

There are a few things to notice:

  1. I am uncertain about a few choices I made here.

    In the struct-interface system, when the PacketManager receives a packet, it uses a compiled lambda expression to create an instance of the appropriate type based on the header it reads.

    In the class-inheritance system, to get a packet instance from type (e.g. for serializing and sending), I use e.g. PacketManager.GetPacketInstance(typeof(ExamplePacket)) as ExamplePacket;

    Regarding performance: is there anything here that looks really bad? I’m okay with having a sub-optimal system, as long as it’s not actually terrible.

  2. In the struct-interface system, packet files are longer as they have to contain the redundant boilerplate code for throwing “not implemented” exceptions. Default method implementations can be used to eliminate this problem, which is what the class-inheritance system does. As of C# 8, interfaces can have default method implementations as well, but unfortunately I use Unity which doesn’t support them yet. If Unity were to support them in the future, would it even be an appropriate use in this scenario? From what I’ve read, it wouldn’t really fit what they were intended for; but it would perhaps have the same effect and solve the problem nonetheless?

  3. In the struct-interface system, packets are able to be sent without referencing some dictionary or pool (I cache the packets to minimize GC in the class-inheritance system); so they can be created from anywhere.

  4. Perhaps knocking any benefit from point 3, in both systems the process of sending packets requires interacting with the PacketManager class: the struct-interface system uses it to serialize and send the packet, and the class-inheritance system uses it to get a pooled instance.
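The header-to-factory dispatch described in point 4 and in the struct-interface instance creation can be sketched language-agnostically (written in Java here; every name is illustrative, and the hash-based header is only a stand-in for however the header is actually computed at initialization):

```java
import java.util.*;
import java.util.function.Supplier;

// Sketch of a registry that maps computed packet headers to factories, so an
// incoming header can be turned into the right packet instance without
// hardcoded ids. All names are illustrative.
public class PacketRegistry {
    interface Packet { String name(); }

    private final Map<Integer, Supplier<Packet>> factories = new HashMap<>();
    private final Map<Class<? extends Packet>, Integer> headers = new HashMap<>();

    // Compute the header from the type name at registration (initialization) time.
    public void register(Class<? extends Packet> type, Supplier<Packet> factory) {
        int header = type.getSimpleName().hashCode(); // stand-in header computation
        factories.put(header, factory);
        headers.put(type, header);
    }

    public Packet createFromHeader(int header) {
        Supplier<Packet> f = factories.get(header);
        if (f == null) throw new IllegalArgumentException("unknown header " + header);
        return f.get();
    }

    public int headerOf(Class<? extends Packet> type) {
        return headers.get(type);
    }

    static class ExamplePacket implements Packet {
        public String name() { return "ExamplePacket"; }
    }

    public static void main(String[] args) {
        PacketRegistry registry = new PacketRegistry();
        registry.register(ExamplePacket.class, ExamplePacket::new);
        int header = registry.headerOf(ExamplePacket.class);
        System.out.println(registry.createFromHeader(header).name()); // ExamplePacket
    }
}
```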

A few more notes:

I’ve thought about using codegen to try and alleviate the limitations that come with the struct-interface system (i.e. reduce the amount of repetitive work and perhaps to also compute headers pre-build), but I have a hunch that doing so may cause more issues down the road by adding another layer of “complexity” to the system. I don’t plan on multithreading the system, and in the unlikely event that I do, it probably won’t go beyond a couple of threads.

I guess most of these “limitations” are pretty minor. Both systems are quite similar and would probably work fine either way.

My intuition tells me that structs are better suited as the container for packets. A couple of reasons for this are:

  • Size is not a concern. On average, the packets will be tiny (< 16 bytes), and they are passed by reference anyway to avoid copying.
  • Packets are “short-lived”. They aren’t used any more after handling or sending.

I guess I should make it clear I’m referring to my own “RpcPackets” here, not the ENet packets that might also appear in my code examples.

Please correct me if I’ve made any wrong assumptions (especially on the correct use of structs) in this post.

I want to know if I’m correct in thinking that structs are an appropriate choice in this situation, or if I’d actually be better off using classes. Also any insights or opinions about the system in general (I hope nothing is too bad) would be greatly appreciated.

multithreading – Java blocking queue download process scenario

I have the following scenario .

I have incoming file download requests, and each download happens in a different thread until the pool size is exceeded. After a download completes, a processor processes the downloaded item. So I created the following. I wonder whether my use of threads and executors makes sense.


public class DownloadTaskEnqueuer {
    private static final BlockingQueue<Task> downloadQueue = new LinkedBlockingQueue<>();
    private static final BlockingQueue<Task> processQueue = new LinkedBlockingQueue<>();
    private static final ExecutorService executor = Executors.newCachedThreadPool();

    public boolean offer(Task task) {
        return downloadQueue.offer(task);
    }

    public void createPool(int size) {
        for (int i = 0; i < size; i++) {
            executor.execute(new DownloadTask(downloadQueue, processQueue));
            executor.execute(new ProcessTask(processQueue));
        }
    }
}

Download task

public class DownloadTask implements Runnable {
    private final BlockingQueue<Task> downloadQueue;
    private final BlockingQueue<Task> processQueue;
    // constructor for initializing the two queues

    public void run() {
        while (true) {
            Task task = downloadQueue.poll();
            if (task != null) {
                // download the item, then hand it over for processing
                processQueue.offer(task);
            } else {
                // sleep 250 ms
            }
        }
    }
}

Process task

public class ProcessTask implements Runnable {
    private final BlockingQueue<Task> processQueue;
    // constructor for initializing the queue

    public void run() {
        while (true) {
            Task task = processQueue.poll();
            if (task != null) {
                // process the downloaded item
            } else {
                // sleep 250 ms
            }
        }
    }
}

Use case (pseudo)


listener.listen((task) -> {
    enqueuer.offer(task); // enqueuer: a DownloadTaskEnqueuer instance
});
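One thing worth considering: BlockingQueue.take() blocks until an item arrives, which removes the poll-then-sleep busy loop entirely. A self-contained sketch of the same download/process pipeline under that change (all names are illustrative):

```java
import java.util.concurrent.*;

// Sketch of the pipeline using take() instead of poll() + sleep: take()
// blocks until an item is available, so the 250 ms sleep loop disappears.
public class PipelineSketch {
    static boolean runPipeline(String[] items) throws InterruptedException {
        BlockingQueue<String> downloadQueue = new LinkedBlockingQueue<>();
        BlockingQueue<String> processQueue = new LinkedBlockingQueue<>();
        CountDownLatch done = new CountDownLatch(items.length);
        ExecutorService pool = Executors.newFixedThreadPool(2);

        pool.execute(() -> {                            // download worker
            try {
                while (true) {
                    String item = downloadQueue.take(); // blocks; no sleep needed
                    processQueue.put(item);             // hand over for processing
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        pool.execute(() -> {                            // process worker
            try {
                while (true) {
                    processQueue.take();                // "process" the item
                    done.countDown();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        for (String item : items) {
            downloadQueue.put(item);
        }
        boolean finished = done.await(5, TimeUnit.SECONDS);
        pool.shutdownNow();                             // interrupts the workers
        return finished;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runPipeline(new String[]{"a", "b", "c"})
                ? "all items processed" : "timed out");
    }
}
```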

multithreading – Java 2 grouped thread scenario

I have the following multithreaded scenario and code. It looks kind of messy, and I want to know if I can make it better, or if there are any flaws to fix.

When a request arrives, one thread downloads it, and the remaining operations depend on the download, so they need to wait for the download to finish. But many requests can come in, and the download tasks must run at the same time, while each process operation needs to wait for its own download task. So I did the following.


public interface ProcessCompletedListener {
    void onComplete(Object object);
}

public interface RequestListener {
    void onRequest(Object request);
}

The Receiver class, where I send requests for testing purposes.

public class Receiver {
    private RequestListener requestListener;

    public void setRequestListener(RequestListener requestListener) {
        this.requestListener = requestListener;
    }

    public void requestBomb() {
        String[] names = new String[]{"a", "b", "c"};

        int i = 0;
        while (i < 3) {
            try {
                requestListener.onRequest(names[i]);
                Thread.sleep(500); // pause between requests
                i++;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

Custom blocking queue

public class CustomBlockingQueue {
    private BlockingQueue<Object> blockingQueue = new LinkedBlockingDeque<>();
    private ProcessCompletedListener processCompletedListener;

    public boolean offerAndProcess(Object object) {
        if (blockingQueue.offer(object)) {
            process(object);
            return true;
        }
        return false;
    }

    public void setProcessCompletedListener(ProcessCompletedListener processCompletedListener) {
        this.processCompletedListener = processCompletedListener;
    }

    public boolean containsFileEvent(Object object) {
        return blockingQueue.contains(object);
    }

    private void process(Object object) {
        new Thread(() -> {
            System.err.println(object + ": Request process started.");

            try {
                Thread.sleep(6000); // mock for real operation
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.err.println(object + ": Request process completed.");

            if (processCompletedListener != null) {
                processCompletedListener.onComplete(object);
            }
        }).start();
    }
}

Demo class

public class Demo {
    public CustomBlockingQueue customBlockingQueue;

    public void setCustomBlockingQueue(CustomBlockingQueue customBlockingQueue) {
        this.customBlockingQueue = customBlockingQueue;
    }

    public CustomBlockingQueue getCustomBlockingQueue() {
        return customBlockingQueue;
    }

    public static void main(String[] args) {
        Demo demo = new Demo();
        demo.setCustomBlockingQueue(new CustomBlockingQueue());

        Receiver receiver = new Receiver();
        Thread t = new Thread(() -> {
            demo.getCustomBlockingQueue().setProcessCompletedListener((Object object) -> {
                System.err.println(object + ": post process started");
                try {
                    Thread.sleep(1500); // mock for real operation
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.err.println(object + ": post process completed");
            });
            receiver.setRequestListener((request) ->
                    demo.getCustomBlockingQueue().offerAndProcess(request));
            receiver.requestBomb();
        });
        t.start();
    }
}

java – Effective thread usage under blocking scenario

I have the following scenario.

I am listening for file requests, and when one arrives, I start a download task in a new thread. After the download task ends, a process task starts; but, and this is important, the process task must wait for the download task. To do that, I can use dt_thread.join() or just stop using a thread and download in a blocking way. But in both cases, incoming file requests get blocked, and this turns out to be a performance issue.

I need to handle download tasks in threads, but I also need to ensure that each process task starts after its own download task.

What kind of thread logic can I apply?

public void activateListener() {
    fileRequestService.listen((name) -> {
        DownloadTask dt = new DownloadTask(); // DownloadTask implements Runnable
        Thread dt_thread = new Thread(dt);

        ProcessTask pt = new ProcessTask(); // ProcessTask implements Runnable
        Thread pt_thread = new Thread(pt);
    });
}
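One way to get this ordering without blocking the listener is to make each request its own two-stage task, e.g. with CompletableFuture: the process stage is chained to its own download stage, while different requests run concurrently. A sketch, with placeholder download/process bodies and illustrative pool sizes:

```java
import java.util.concurrent.*;

public class DownloadPipeline {
    // Each request becomes its own two-stage task: the process stage waits for
    // its own download, but different requests never block one another.
    private final ExecutorService downloadPool = Executors.newFixedThreadPool(4);
    private final ExecutorService processPool = Executors.newFixedThreadPool(2);

    public CompletableFuture<String> handle(String name) {
        return CompletableFuture
                .supplyAsync(() -> download(name), downloadPool) // concurrent downloads
                .thenApplyAsync(this::process, processPool);     // starts after its own download
    }

    private String download(String name) {
        return name + ":downloaded";   // stand-in for the real download
    }

    private String process(String payload) {
        return payload + ":processed"; // stand-in for the real processing
    }

    public void shutdown() {
        downloadPool.shutdown();
        processPool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        DownloadPipeline pipeline = new DownloadPipeline();
        CompletableFuture<String> a = pipeline.handle("a");
        CompletableFuture<String> b = pipeline.handle("b");
        System.out.println(a.get()); // a:downloaded:processed
        System.out.println(b.get()); // b:downloaded:processed
        pipeline.shutdown();
    }
}
```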