Do you think a Java tool that records JUnit and Mockito unit/integration tests from runtime calls would be useful?

I know TDD is the way, but there might be cases when generating tests from existing code could be useful:

  • when you need to maintain code that does not have unit tests
  • when, for some reason, your company has no time/budget/culture/willingness to do TDD, but having generated tests is still better than having no tests
  • when the tests and mocks are long and hard to write and you could use a jumpstart
  • when you are doing big changes in the design and a lot of the tests need to be rewritten
  • when you want to record a functional test with real user data

I imagine this tool would work like this:

  1. You mark a method with the @RecordTest annotation.
  2. You mark some injected dependencies with the @RecordMockForTest annotation.
  3. You run the project and interact with UI/API.
  4. Context, arguments and results are retrieved using AOP and reflection and used to generate a test file on disk for each function call.
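For illustration, step 4 could be sketched with a plain JDK dynamic proxy (the real tool would presumably use AspectJ or a Java agent; the `Repo` interface and the recorded-string format here are made-up placeholders):

```java
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RecordingSketch {
    // Illustrative stand-in for a dependency marked @RecordMockForTest,
    // reduced to an interface so a JDK dynamic proxy can wrap it.
    interface Repo {
        String getEmployee(int id);
    }

    // Wraps the real dependency and captures every call; each captured
    // entry would later be emitted as a when(...).thenReturn(...) line.
    static List<String> record() {
        Repo real = id -> "Employee#" + id;
        List<String> recorded = new ArrayList<>();
        Repo recording = (Repo) Proxy.newProxyInstance(
                Repo.class.getClassLoader(),
                new Class<?>[] { Repo.class },
                (proxy, method, callArgs) -> {
                    Object result = method.invoke(real, callArgs);
                    recorded.add(method.getName() + Arrays.toString(callArgs) + " -> " + result);
                    return result;
                });
        recording.getEmployee(1); // the "interact with UI/API" step
        return recorded;
    }

    public static void main(String[] args) {
        System.out.println(record().get(0)); // getEmployee[1] -> Employee#1
    }
}
```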

Example: Let’s say you have a method that calculates an Employee’s salary and has no automated tests yet.

public class SalaryService {
    private final EmployeeRepository employeeRepository;

    public SalaryService(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    public double computeEmployeeSalary(int employeeId) throws Exception {
        // ...
        Employee employee = employeeRepository.getEmployee(employeeId);
        // ...
        return salary;
    }
}

public class EmployeeRepository {
    public Employee getEmployee(int id) throws Exception {
        // ...
        // Get Employee from DB
        // ...
        return employee;
    }
}

You add @RecordTest to the computeEmployeeSalary method to mark that you want tests generated from the calls to this method.

You add @RecordMockForTest to EmployeeRepository class to mark that you want this class mocked in the tests.

The resulting test would be:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
import com.sampleapp.model.Department;
import com.sampleapp.model.Employee;
import com.sampleapp.services.EmployeeRepository;
import static org.mockito.Mockito.*;

class SalaryServiceTest {
    @Test
    void computeEmployeeSalary() throws Exception {
        // Arrange
        Department department1 = Department.builder()
            .id(100)
            .name("IT")
            .build();
        Employee employee1 = Employee.builder()
            .id(1)
            .firstName("John")
            .lastName("Doe")
            .department(department1)
            .salaryParam1(1000.0)
            .salaryParam2(1500.0)
            .salaryParam3(200.0)
            .build();
        EmployeeRepository employeeRepository = mock(EmployeeRepository.class);
        when(employeeRepository.getEmployee(1)).thenReturn(employee1);
        SalaryService salaryService = new SalaryService(employeeRepository);

        // Act
        double result = salaryService.computeEmployeeSalary(1);

        // Assert
        assertEquals(4000.0, result);
    }
}

I am thinking of writing an open-source tool for this. It would be very helpful to hear more opinions before I invest effort in a direction that might not be so useful.

dependency injection – Is there a proper way of implementing runtime control of dependencies using DI? Is factory pattern okay?

I’m currently brushing up and learning about a bunch of techniques to hopefully begin implementing in my own workflow; one of which is IoC (and DI in particular).

I’m hoping someone could clear up the confusion I have after reading two articles on the subject:

In this post, the author seems to demonstrate that you can use the factory pattern alongside DI, with the goal of enabling runtime control of which implementation of the dependency is used.

In this Microsoft doc, they seem to recommend avoiding this approach (or rather, avoiding mixing it, or any service locator pattern, with DI). I’m not sure whether this means there’s always a better alternative, or simply that it should be avoided in most scenarios while there may be some exceptions where it has merit (e.g. runtime control).


I guess another way to frame the question: when using DI, should runtime control of dependencies be avoided just as much as mixing in the service locator pattern, so as to reduce the need for things like a service locator in the first place?

I’m writing this with pretty much no experience using DI yet, so apologies if I’m somehow missing the big picture.
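For concreteness, the factory-alongside-DI pattern described in the first article looks roughly like this (a hypothetical Java sketch; `PaymentProcessor` and the two implementations are made-up names). The factory is the only thing a consumer asks the container for, and it picks the implementation from runtime data, so callers never touch the container themselves, which is what distinguishes it from a service locator:

```java
import java.util.Map;

public class FactorySketch {
    // Hypothetical service with two runtime-selectable implementations.
    interface PaymentProcessor { String process(); }
    static class CardProcessor implements PaymentProcessor { public String process() { return "card"; } }
    static class CashProcessor implements PaymentProcessor { public String process() { return "cash"; } }

    // The factory is injected once; it chooses an implementation per call
    // based on runtime data (e.g. a request field).
    static class PaymentProcessorFactory {
        private final Map<String, PaymentProcessor> byKind;
        PaymentProcessorFactory(Map<String, PaymentProcessor> byKind) { this.byKind = byKind; }
        PaymentProcessor forKind(String kind) { return byKind.get(kind); }
    }

    static String process(String kind) {
        // In a real app the DI container would assemble this map once
        // and inject the factory wherever it is needed.
        PaymentProcessorFactory factory = new PaymentProcessorFactory(
                Map.of("card", new CardProcessor(), "cash", new CashProcessor()));
        return factory.forKind(kind).process();
    }

    public static void main(String[] args) {
        System.out.println(process("card")); // card
    }
}
```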

c++17 – Safe runtime numeric casts

The rationale behind this code is to implement runtime-safe numeric conversions for situations where precision loss is possible but not expected. Example: passing a size_t value from 64-bit code to a library (filesystem, database, etc.) that uses a 32-bit type for size, assuming you will never pass more than 4 GB of data. Safety here means the cast result has exactly the same numeric (not binary) value (i.e. any rounding, value wrapping, sign re-interpretation, etc. is treated as a casting failure). At the same time, simple implicit casting for maximum performance is highly desired. This is especially useful for template classes, which are usually assumed to apply no special treatment to the types they operate on. Since it would be used in many places of my code, I’m wondering if I’ve overlooked something.

Here’s the code (note that “to”-template argument goes before “from”-argument for automatic argument deduction in real-world usage):

#include <limits>
#include <type_traits>
#include <stdexcept>
#include <typeinfo>

class SafeNumericCast {
    protected:
        enum class NumberClass {
            UNSIGNED_INTEGER, SIGNED_INTEGER, IEEE754
        };

    protected:
        template <typename T> static constexpr NumberClass resolveNumberClass() {
            static_assert(std::numeric_limits<T>::radix == 2, "Safe numeric casts can only be performed on binary number formats!");
            if constexpr (std::numeric_limits<T>::is_integer) {
                if constexpr (!std::is_same<T, bool>::value) { // NOTE Boolean is conceptually not a number (while it is technically backed by one)
                    return std::numeric_limits<T>::is_signed ? NumberClass::SIGNED_INTEGER : NumberClass::UNSIGNED_INTEGER;
                }
            } else if constexpr (std::numeric_limits<T>::is_iec559) {
                return NumberClass::IEEE754;
            }
            throw std::logic_error("SafeNumericCast > Unsupported numeric type!");
        }

    public:
        template <typename TTo, typename TFrom> static constexpr bool isSafelyCastable() {
            if constexpr (!std::is_same<TTo, TFrom>::value) {
                const NumberClass toNumberClass = resolveNumberClass<TTo>();
                const NumberClass fromNumberClass = resolveNumberClass<TFrom>();
                if constexpr (toNumberClass == NumberClass::UNSIGNED_INTEGER) {
                    if constexpr (fromNumberClass == NumberClass::UNSIGNED_INTEGER) {
                        return std::numeric_limits<TTo>::digits >= std::numeric_limits<TFrom>::digits;
                    }
                } else if constexpr (toNumberClass == NumberClass::SIGNED_INTEGER) {
                    if constexpr ((fromNumberClass == NumberClass::UNSIGNED_INTEGER) || (fromNumberClass == NumberClass::SIGNED_INTEGER)) {
                        return std::numeric_limits<TTo>::digits >= std::numeric_limits<TFrom>::digits;
                    }
                } else if constexpr (toNumberClass == NumberClass::IEEE754) {
                    if constexpr ((fromNumberClass == NumberClass::UNSIGNED_INTEGER) || (fromNumberClass == NumberClass::SIGNED_INTEGER) || (fromNumberClass == NumberClass::IEEE754)) {
                        return std::numeric_limits<TTo>::digits >= std::numeric_limits<TFrom>::digits;
                    }
                }
                return false;
            }
            return true;
        }
        template <typename TTo, typename TFrom> static constexpr TTo cast(TFrom value) {
            static_assert(isSafelyCastable<TTo, TFrom>());
            return value;
        }
};

class SafeRuntimeNumericCast : public SafeNumericCast {
    private:
        template <typename TTo, typename TFrom> static constexpr bool isRuntimeCastable(TFrom value, TTo casted) {
            static_assert(!SafeNumericCast::isSafelyCastable<TTo, TFrom>());
            const NumberClass toNumberClass = resolveNumberClass<TTo>();
            const NumberClass fromNumberClass = resolveNumberClass<TFrom>();
            if constexpr (toNumberClass == NumberClass::UNSIGNED_INTEGER) {
                if constexpr (fromNumberClass == NumberClass::UNSIGNED_INTEGER) {
                    return value <= std::numeric_limits<TTo>::max();
                } else if constexpr (fromNumberClass == NumberClass::SIGNED_INTEGER) {
                    if (value > 0) {
                        return value <= std::numeric_limits<TTo>::max();
                    }
                } else if constexpr (fromNumberClass == NumberClass::IEEE754) {
                    return casted == value;
                }
            } else if constexpr (toNumberClass == NumberClass::SIGNED_INTEGER) {
                if constexpr (fromNumberClass == NumberClass::UNSIGNED_INTEGER) {
                    return value <= std::numeric_limits<TTo>::max();
                } else if constexpr (fromNumberClass == NumberClass::SIGNED_INTEGER) {
                    return ((value >= std::numeric_limits<TTo>::min()) && (value <= std::numeric_limits<TTo>::max()));
                } else if constexpr (fromNumberClass == NumberClass::IEEE754) {
                    return casted == value;
                }
            } else if constexpr (toNumberClass == NumberClass::IEEE754) {
                if constexpr (fromNumberClass == NumberClass::UNSIGNED_INTEGER) {
                    return value <= (1ULL << std::numeric_limits<TTo>::digits); // NOTE Can't do "casted == value" check because of int-> float promotion
                } else if constexpr (fromNumberClass == NumberClass::SIGNED_INTEGER) {
                    return static_cast<TFrom>(casted) == value; // NOTE Presumably faster than doing abs(value)
                } else if constexpr (fromNumberClass == NumberClass::IEEE754) {
                    return (casted == value) || (value != value);
                }
            }
            return false;
        }
    public:
        using SafeNumericCast::isSafelyCastable;
        template <typename TTo, typename TFrom> static constexpr bool isSafelyCastable(TFrom value) {
            if constexpr (!SafeNumericCast::isSafelyCastable<TTo, TFrom>()) {
                return isRuntimeCastable<TTo>(value, static_cast<TTo>(value));
            }
            return true;
        }
        template <typename TTo, typename TFrom> static constexpr TTo cast(TFrom value) {
            if constexpr (!SafeNumericCast::isSafelyCastable<TTo, TFrom>()) {
                TTo casted = static_cast<TTo>(value);
                if (isRuntimeCastable<TTo>(value, casted)) {
                    return casted;
                }
                throw std::bad_cast();
            }
            return value;
        }
};

The usage is simple:

SafeNumericCast::cast<uint64_t>(42U); // Statically check for possible precision loss
SafeRuntimeNumericCast::cast<float>(1ULL); // Dynamically check for precision loss

SafeNumericCast::isSafelyCastable<uint64_t, uint32_t>(); // Non-throwing static check
SafeRuntimeNumericCast::isSafelyCastable<float>(1ULL); // Non-throwing dynamic check

Here are the assumptions the code is based on:

  • The code is working only on 10 built-in binary numeric types – this is intended for now
  • Any unsigned integer can be exactly represented by another signed or unsigned integer as long as it has enough digit capacity, otherwise a runtime value check against max value is required
  • Any signed integer can be exactly represented by another signed integer as long as it has enough digit capacity, otherwise a runtime value check against min and max values is required
  • A signed integer can’t generally be represented by an unsigned integer, so a runtime value check is required. We can’t simply compare by value due to sign re-interpretation during signed/unsigned promotion, so we have to check separately for a negative sign and check a positive value against the possible max value
  • Integers can be represented by an IEEE 754 float as long as it has enough digit capacity, otherwise a runtime value check is required. We can’t simply compare by value due to possible rounding during integer/float promotion, so we have to manually check against maximum representable integer.
  • IEEE 754 floats can’t be generally represented by an integer, so we have to check at runtime by simply comparing original and casted values. This should also cover NaN/Infinity/etc cases.
  • Any IEEE 754 float can be exactly represented by another IEEE 754 float of the same or bigger size (that is, double simply has more capacity for both mantissa and exponent, thus any float is exactly representable by a double). Otherwise, a simple runtime value comparison is required. The only corner case is NaN, and std::isnan() is unfortunately not constexpr, but we can work around that by checking value != value.

pytorch – How can I solve RuntimeError: cuda runtime error (59) : device-side assert triggered error

There is a part I cannot understand when training a deep learning model on Colab.

When I first train the model, measuring training loss and valid loss at the same time, it works for every batch.

However, when I train again in the same cell, or train another model in a different cell, RuntimeError: cuda runtime error (59) : device-side assert triggered error happens suddenly.

I don’t think it is an index error in the loss function, because if it were, the first run should not have succeeded!

Is there anybody who can solve this problem?

I really need help!!

Unique elements, long runtime Haskell

I have two functions that must return only unique elements: one works with a list of Ints, and the other takes a String and returns a list of Chars. My runtime is currently quite high and I’m not allowed to use imports, but I was wondering how to improve them:

uniqueInt :: [Int] -> [Int]
uniqueInt [] = []
uniqueInt xs = [x | (x, y) <- zip xs [0..], x `notElem` take y xs]

Example:
uniqueInt [1,2,1,3,1,2] = [1,2,3]

uniqueString :: String -> [Char]
uniqueString "" = []
uniqueString xs = [x | (x, y) <- zip xs [0..], x `notElem` take y xs]

Example: uniqueString "hello" = "helo"

c# – Unity Runtime Surface Snapping (Like Shift+Ctrl in Editor)

I am trying to allow the user of my VR game to move objects around using a pointer, and then, when a button is held down, snap to a grid and align to the highest surface – essentially replicating the Editor behavior of holding down Shift+Ctrl and moving a transform gizmo around. (Try it, it’s fun!)

I have the grid part down, but can’t wrap my head around how to do the surface snapping.
Here is the code I have so far. I would appreciate any and all help!

 
// offsetPos is where the VR pointer is.
Vector3 offsetPos = pointer.objectControlPoint.transform.position + cursorOffset;
Vector3 newPos;

if (isSnapping) // Snap-to-ground code.
{
    // I read previously to do this up-then-down thing, but it's not working as expected.
    float snapYPos = 0f;
    RaycastHit groundHit;
    if (Physics.Raycast(selectedObject.transform.position, Vector3.down, out groundHit))
    {
        RaycastHit objectHit;
        if (Physics.Raycast(groundHit.point, Vector3.up, out objectHit))
        {
            Vector3 snapDiff = groundHit.point - objectHit.point;
            snapYPos = snapDiff.y + selectedObject.GetComponent<Collider>().bounds.extents.y;
        }
    }

    // worldGrid is a MonoBehaviour on another object, and gridCellSize is just a float.
    float gridPosX = Mathf.Floor(offsetPos.x / worldGrid.gridCellSize) * worldGrid.gridCellSize;
    float gridPosZ = Mathf.Floor(offsetPos.z / worldGrid.gridCellSize) * worldGrid.gridCellSize;
    newPos = new Vector3(gridPosX, snapYPos, gridPosZ); // Nearest grid cell, with a Y of the snap position.
}
else // If not in snap mode, set target position to just the VR cursor.
{
    newPos = offsetPos;
}
 
// I know lerp might not be the most efficient, but I like the smooth effect, and it looks good when snapping to the grid...
selectedObject.transform.position = Vector3.Lerp(selectedObject.transform.position, newPos, movementLerpSpeed * Time.deltaTime);

The objects I need to be snapping all are different sizes and have their origins in different places, but all do have appropriate box colliders.

Thanks so much!

time complexity – Theta bound for runtime analysis of nested while loops

I am trying to fully analyze the running time of $\texttt{nestedLoops}$ in terms of $n$ with a Theta bound.

The Java code I have is as follows:

public void nestedLoops(int n) {
    int i = 1;
    while (i < n) {
        int j = i;
        while (j > 1) {
            int k = 0;
            while (k < n) {
                k += 2;
            }
            j = j / 2;
        }
        i *= 2;
    }
}

I know that the innermost while loop has an obvious runtime of $\lceil \frac{n}{2} \rceil$.
But I get stuck on the next while loops. I think the middle while loop has a runtime of $\lfloor \log_2 \texttt{i} \rfloor$, but that is very confusing for me.
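Here is how far I got combining the pieces (assuming the outer loop variable takes the values $i = 2^t$, so the middle loop runs $\lfloor \log_2 i \rfloor = t$ times, and each middle iteration pays the inner loop's $\lceil n/2 \rceil$ steps):

$$T(n) \approx \sum_{t=0}^{\lfloor \log_2 n \rfloor} t \cdot \left\lceil \frac{n}{2} \right\rceil = \left\lceil \frac{n}{2} \right\rceil \cdot \Theta(\log^2 n) = \Theta(n \log^2 n),$$

so I suspect the bound is $\Theta(n \log^2 n)$, but I am not sure my reasoning about the middle loop is right.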

Any help would be taken with much gratitude.

root access – Android runtime permission’s internal working?

I was looking into Android runtime permissions and was very curious about how they work internally. For example, if an application asks for access to external storage, the user is presented with a dialog, and after pressing the Allow button the app is granted read/write access to external storage. I just want to know what happens internally when the user presses the Allow button – what changes occur inside Android.

I know permissions are stored in different files like /data/system/packages.list, /data/system/packages.xml and /data/system/users/0/runtime-permissions.xml, but changing them manually does not affect the app’s permission preference. So what actually happens when the user grants a permission to a certain app, and which file gets updated?

Can I start a Twilio Conference via the API or through a Runtime function?

I’m building a Conference Call moderation console (in Salesforce). There will be many attendees joining a conference, either by dialing in or by our outbound autocalling. There will also be up to 3 moderators joining.

The client does NOT want the conference to start automatically when the moderators enter. Rather, they want it behind a “Start Conference” button. From the docs, I see how to start the conference when a moderator joins (startConferenceOnEnter), but I can’t find any instructions for simply sending an HTTP POST or using a Twilio Runtime function to start the conference. Am I missing it?

unity – How can I apply the changes in runtime?

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class AnimFlyertransportationController : MonoBehaviour
{
    public GameObject JetFlame;
    public bool turnOffOn = false;

    // Start is called before the first frame update
    void Start()
    {
        TurnAllAnimtorsOff(transform, turnOffOn);

        if (turnOffOn)
        {
            JetFlame.SetActive(false);
        }
        else
        {
            JetFlame.SetActive(true);
        }
    }

    // Update is called once per frame
    void Update()
    {

    }

    private void TurnAllAnimtorsOff(Transform root, bool onOff)
    {
        Animator[] animators = root.GetComponentsInChildren<Animator>();

        foreach (Animator a in animators)
        {
            if (onOff)
            {
                a.enabled = false;
            }
            else
            {
                a.enabled = true;
            }
        }
    }
}

I want the changes to be applied in real time (in Update) to all Animators and also to the JetFlame whenever I change the turnOffOn flag at runtime. Right now the changes apply only in the editor, before running the game.