c – Portable Build System for Virtual Machine with Editor and Unit Tests

I am automating the building and unit testing of a personal project using shell scripts, CMake, and make on the latest version of Fedora Linux. I have also tested building on the latest version of Ubuntu; I had to decrease the minimum CMake version on Ubuntu to make it work. Parts of the unit testing have previously been reviewed on Code Review (A, B, C, C2).

My original development environment was, and still is, Visual Studio 2019 on Windows 10 Pro; however, to make it easier to get reviews and to create a portable system and application, I have developed this build system as well.

It is possible that I could have used CMake for the entire build system, but one of the requirements for this system is that each unit test can be built as a separate unit test as well as being combined into other unit tests for regression testing purposes. Each unit test needs to stand on its own because I am using the unit tests to debug the core code as well as to unit test it. Using only CMake created a single object and binary tree, and that was not the intention.

The unit tests themselves are not automated yet; that is the next step in the project. There are currently 2 unit tests that have been completed: the lexical analyzer and the parser. All the other unit tests are empty shells at this point.

Requirements:

  1. Build on any system that supports the original Bourne Shell and CMake.
  2. Build the unit tests as individual unit tests and as a single unit test that runs all the previous unit tests.
  3. Use regression testing in each progressive unit test to make sure the new code doesn’t break the previous functionality.
  4. Build the primary application after all the unit tests have been built.

What I want out of this review:

  1. I have tested the build on Fedora and Ubuntu; I would appreciate it if someone could test the build on Mac OSX, as my Mac died 3 years ago.
  2. It’s been a long time since I’ve written shell scripts (at least 6 years, and much longer than that for complex shell scripts).
    1. Do my shell scripts follow best practices?
    2. How can I improve them?
    3. Do you see any portability problems with them?
  3. I’ve never written CMake scripts before, all suggestions will be helpful.
  4. It may be that this last request is off-topic, but how could I build this on Windows 10 using the scripts and CMake? That would make the build system truly portable.

You can review only the shell scripts or only the CMake code if you prefer. The shell scripts are presented first, followed by 3 CMakeLists.txt files.

Build Directory Structure and Build Files

VMWithEditor  
    buildAll.sh  
    buildClean.sh  

    VMWithEditor/VMWithEditor:
        buildDebug.sh
        buildRelease.sh
        CMakeLists.txt

        VMWithEditor/VMWithEditor/UnitTests:
            buildAllDebug.sh
            buildAllRelease.sh

            VMWithEditor/VMWithEditor/UnitTests/CommandLine_UnitTest/CommandLine_UnitTest:
                buildDebug.sh
                buildRelease.sh
                CMakeLists.txt

            VMWithEditor/VMWithEditor/UnitTests/Common_UnitTest_Code:
                CodeReview.md
                unit_test_logging.c
                UTL_unit_test_logging.h

            VMWithEditor/VMWithEditor/UnitTests/ControlConsole_UnitTest/ControlConsole_UnitTest:
                buildDebug.sh
                buildRelease.sh
                CMakeLists.txt

            VMWithEditor/VMWithEditor/UnitTests/Editor_UnitTest/Editor_UnitTest:
                buildDebug.sh
                buildRelease.sh
                CMakeLists.txt

            VMWithEditor/VMWithEditor/UnitTests/HRF_UnitTest/HRF_UnitTest:
                buildDebug.sh
                buildRelease.sh
                CMakeLists.txt

            VMWithEditor/VMWithEditor/UnitTests/Parser_Unit_Test/Parser_Unit_Test:
                buildDebug.sh
                buildRelease.sh
                CMakeLists.txt

            VMWithEditor/VMWithEditor/UnitTests/RunAllUnitTests/RunAllUnitTests:
                buildDebug.sh
                buildRelease.sh
                CMakeLists.txt

            VMWithEditor/VMWithEditor/UnitTests/State_Machine_Unit_Test/State_Machine_Unit_Test:
                buildDebug.sh
                buildRelease.sh
                CMakeLists.txt

            VMWithEditor/VMWithEditor/UnitTests/VirtualMachine_UnitTest/VirtualMachine_UnitTest:
                buildDebug.sh
                buildRelease.sh
                CMakeLists.txt

I am presenting the shell scripts first and then the CMakeLists.txt files.

Top Shell Script Level Code

VMWithEditor/buildAll.sh

#!/bin/sh
#
# Build the requested version of the Virtual Machine and all the unit tests.
# Stop on any build errors.
#
if [ -z "$1" ] ; then
    echo "Usage: buildAll.sh BUILDTYPE where BUILDTYPE is Debug or Release."
    exit 1
elif [ "$1" != 'Debug' ] && [ "$1" != 'Release' ] ; then
    printf "\nUnknown build type %s\n" "$1"
    exit 1
fi
#
# Build the necessary variables
#
BUILDTYPE="$1"
UNITTESTDIRECTORY="./VMWithEditor/UnitTests"
SHELLFILE="buildAll${BUILDTYPE}.sh";
VMSHELLFILE="build${BUILDTYPE}.sh";
FULLSPECSHELLFILE="${UNITTESTDIRECTORY}/${SHELLFILE}";
LOGFILE="build${BUILDTYPE}log.txt"
#
# Execute the build scripts
#
# Build The Unit Tests
#
if [ -d "${UNITTESTDIRECTORY}" ] ; then
    if [ -f "${FULLSPECSHELLFILE}" ] ; then
        echo "Building $UNITTESTDIRECTORY"
        cd "${UNITTESTDIRECTORY}" || exit
        ./"${SHELLFILE}" > "${LOGFILE}" 2>&1
        retVal=$?
        if [ $retVal -ne 0 ]; then
            echo "Unit Test Build Failed!"
            exit $retVal
        fi
        cd ../ || exit
    fi
#
# Build the Virtual Machine with Editor
#
    if [ -f "./${VMSHELLFILE}" ] ; then
        ./"${VMSHELLFILE}" > "${LOGFILE}" 2>&1
        retVal=$?
        if [ ${retVal} -ne 0 ]; then
            echo "Virtual Machine With Editor Build Failed!"
            echo "Check logs for details"
            exit ${retVal}
        else
            printf "%s Version Virtual Machine With Editor Build and Unit Test Build Completed!\n" "${BUILDTYPE}"
            exit 0
        fi
    fi
fi

VMWithEditor/buildClean.sh

#!/usr/bin/env bash
#
# Clean the build artifacts of the Virtual Machine and all the unit tests.
# Stop on any build errors.
#
UNITTESTDIRECTORY="./VMWithEditor/UnitTests"
if [ -d "$UNITTESTDIRECTORY" ] ; then
    cd "$UNITTESTDIRECTORY" || exit
    make clean
    retVal=$?
    if [ $retVal -ne 0 ]; then
        exit $retVal
    fi
    cd ../ || exit
    make clean
fi
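Since `make clean` depends on generated Makefiles that may no longer exist, an alternative sketch (not part of the reviewed code) is to delete the per-project Debug and Release build trees directly, which cleans every unit test in one pass:

```shell
#!/bin/sh
# Sketch: remove every Debug/ and Release/ build tree under the
# current directory. -prune stops find from descending into a tree
# it is about to delete; -print logs each directory removed.
find . -type d \( -name Debug -o -name Release \) -prune -print -exec rm -rf {} +
```

This assumes the project's own sources never live in directories named Debug or Release, which holds for the directory layout shown above.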

Middle Layer Shell Scripts

The following 2 shell scripts are in the UnitTests directory:

buildAllDebug.sh

#!/usr/bin/env bash

# Build the debug version of all the unit tests
# Stop on any build errors.

for i in *
do
    if [ -d "$i" ] ; then
        TESTDIRECTORY="$i/$i"
        SHELLFILE="$TESTDIRECTORY/buildDebug.sh"
        if [ -f "$SHELLFILE" ] ; then
            echo "Building $TESTDIRECTORY"
            cd "$TESTDIRECTORY" || exit
            ./buildDebug.sh > buildDebuglog.txt 2>&1
            retVal=$?
            if [ $retVal -ne 0 ]; then
                exit $retVal
            fi
            cd ../.. || exit
        fi
    fi
done

buildAllRelease.sh

#!/usr/bin/env bash

# Build the release version of all the unit tests
# Stop on any build errors.

for i in *
do
    if [ -d "$i" ] ; then
        TESTDIRECTORY="$i/$i"
        SHELLFILE="$TESTDIRECTORY/buildRelease.sh"
        if [ -f "$SHELLFILE" ] ; then
            echo "Building $TESTDIRECTORY"
            cd "$TESTDIRECTORY" || exit
            ./buildRelease.sh > buildReleaselog.txt 2>&1
            retVal=$?
            if [ $retVal -ne 0 ]; then
                exit $retVal
            fi
            cd ../.. || exit
        fi
    fi
done
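buildAllDebug.sh and buildAllRelease.sh differ only in the build type, so a single parameterized script could replace both. This sketch (the script name buildAllTests.sh is hypothetical) also runs each build in a subshell, which removes the need for `cd ../..` and prevents a failed `cd` from running a build in the wrong directory:

```shell
#!/bin/sh
# Sketch: build all unit tests for the build type given as $1.
# Usage: ./buildAllTests.sh Debug   (or Release; defaults to Debug)
BUILDTYPE="${1:-Debug}"
for i in */ ; do
    i="${i%/}"                               # strip trailing slash from the glob
    TESTDIRECTORY="$i/$i"
    SHELLFILE="$TESTDIRECTORY/build${BUILDTYPE}.sh"
    if [ -f "$SHELLFILE" ]; then
        echo "Building $TESTDIRECTORY"
        # Subshell: the cd is local to the parentheses, so no cd ../.. needed.
        ( cd "$TESTDIRECTORY" || exit 1
          "./build${BUILDTYPE}.sh" > "build${BUILDTYPE}log.txt" 2>&1 ) || exit $?
    fi
done
```

The `for i in */` glob also visits only directories, making the `[ -d "$i" ]` test of the original scripts unnecessary.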

Lowest Level Shell Scripts

The following 2 shell scripts are present in every unit test directory where cmake is executed; the first builds a debuggable version, the second an optimized release version.

buildDebug.sh

#!/bin/sh

# Create a Debug build directory and then build the target within the Debug directory.
# Stop on any build errors and stop the parent process.

mkdir Debug
cd Debug || exit
cmake -DCMAKE_BUILD_TYPE=Debug ..
retVal=$?
if [ $retVal -ne 0 ]; then
    printf "\n\ncmake failed %s!\n\n" "$retVal"
    exit $retVal
fi
make VERBOSE=1
retVal=$?
if [ $retVal -ne 0 ]; then
    printf "\n\nmake failed! %s\n\n" "$retVal"
    exit $retVal
fi

buildRelease.sh

#!/bin/sh

# Create a Release build directory and then build the target within the Release directory.
# Stop on any build errors and stop the parent process.

mkdir Release
cd Release || exit
cmake -DCMAKE_BUILD_TYPE=Release ..
retVal=$?
if [ $retVal -ne 0 ]; then
    printf "\n\ncmake failed %s!\n\n" "$retVal"
    exit $retVal
fi
make
retVal=$?
if [ $retVal -ne 0 ]; then
    printf "\n\nmake failed! %s\n\n" "$retVal"
    exit $retVal
fi

There are 2 unit tests that actually test the existing code, and one unit test that includes all the other unit tests; it works to the extent that the two existing unit tests work (testing is successful for all three tests). The first 2 CMake files presented are the lexical analyzer unit test and the parser unit test. The lexical analyzer unit test is fully complete and was used to debug the lexical analyzer. The parser unit test is also complete; it executes the lexical analyzer unit tests prior to executing the parser unit tests, and was used to debug the parser code in the main project.

The Lexical Analyzer Unit Test CMakeLists.txt file:

cmake_minimum_required(VERSION 3.16.1)

set(EXECUTABLE_NAME "Lexical_Analyzer_Unit_Test.exe")

project(${EXECUTABLE_NAME} LANGUAGES C VERSION 1.0)

if("${CMAKE_BUILD_TYPE}" STREQUAL "Debug")
    set(GCC_WARN_COMPILE_FLAGS  " -Wall ")
    set(CMAKE_C_FLAGS  "${CMAKE_CXX_FLAGS} ${GCC_WARN_COMPILE_FLAGS}")
endif()

set(VM_SRC_DIR "../../..")
set(COMMON_TEST_DIR "../../Common_UnitTest_Code")

add_executable(${EXECUTABLE_NAME} internal_character_transition_unit_tests.c internal_sytax_state_tests.c lexical_analyzer_test_data.c lexical_analyzer_unit_test_main.c lexical_analyzer_unit_test_utilities.c ${VM_SRC_DIR}/error_reporting.c ${VM_SRC_DIR}/lexical_analyzer.c ${VM_SRC_DIR}/safe_string_functions.c ${COMMON_TEST_DIR}/unit_test_logging.c)

set(CMAKE_C_STANDARD 99)
set(CMAKE_C_STANDARD_REQUIRED True)

configure_file(VMWithEditorConfig.h.in VMWithEditorConfig.h)

target_compile_definitions(${EXECUTABLE_NAME} PUBLIC UNIT_TESTING)
target_compile_definitions(${EXECUTABLE_NAME} PUBLIC LEXICAL_UNIT_TEST_ONLY)
target_include_directories(${EXECUTABLE_NAME} PUBLIC "${PROJECT_BINARY_DIR}")
target_include_directories(${EXECUTABLE_NAME} PRIVATE "${VM_SRC_DIR}")
target_include_directories(${EXECUTABLE_NAME} PRIVATE "${COMMON_TEST_DIR}")

The Parser Unit Test CMakeLists.txt file:

cmake_minimum_required(VERSION 3.16.1)

set(EXECUTABLE_NAME "Parser_Unit_Test.exe")

project(${EXECUTABLE_NAME} LANGUAGES C VERSION 1.0)

if("${CMAKE_BUILD_TYPE}" STREQUAL "Debug")
    set(GCC_WARN_COMPILE_FLAGS  " -Wall ")
    set(CMAKE_C_FLAGS  "${CMAKE_CXX_FLAGS} ${GCC_WARN_COMPILE_FLAGS}")
endif()

set(VM_SRC_DIR "../../..")
set(LEXICAL_TEST_DIR "../../State_Machine_Unit_Test/State_Machine_Unit_Test")
set(COMMON_TEST_DIR "../../Common_UnitTest_Code")

add_executable(${EXECUTABLE_NAME} internal_parser_tests.c  parser_unit_test.c  parser_unit_test_main.c ${VM_SRC_DIR}/error_reporting.c ${VM_SRC_DIR}/human_readable_program_format.c ${VM_SRC_DIR}/lexical_analyzer.c ${VM_SRC_DIR}/opcode.c ${VM_SRC_DIR}/parser.c ${VM_SRC_DIR}/safe_string_functions.c  ${VM_SRC_DIR}/virtual_machine.c ${COMMON_TEST_DIR}/unit_test_logging.c ${LEXICAL_TEST_DIR}/internal_character_transition_unit_tests.c ${LEXICAL_TEST_DIR}/internal_sytax_state_tests.c ${LEXICAL_TEST_DIR}/lexical_analyzer_test_data.c ${LEXICAL_TEST_DIR}/lexical_analyzer_unit_test_main.c ${LEXICAL_TEST_DIR}/lexical_analyzer_unit_test_utilities.c)

set(CMAKE_C_STANDARD 99)
set(CMAKE_C_STANDARD_REQUIRED True)

configure_file(VMWithEditorConfig.h.in VMWithEditorConfig.h)

target_compile_definitions(${EXECUTABLE_NAME} PUBLIC UNIT_TESTING)
target_compile_definitions(${EXECUTABLE_NAME} PUBLIC PARSER_UNIT_TEST_ONLY)
target_include_directories(${EXECUTABLE_NAME} PUBLIC "${PROJECT_BINARY_DIR}")
target_include_directories(${EXECUTABLE_NAME} PRIVATE "${VM_SRC_DIR}")
target_include_directories(${EXECUTABLE_NAME} PRIVATE "${COMMON_TEST_DIR}")
target_include_directories(${EXECUTABLE_NAME} PRIVATE "${LEXICAL_TEST_DIR}")

The RunAllUnitTests CMakeLists.txt file:

This file is the most complex of all the CMakeLists.txt files. It includes code from 7 other unit tests.

cmake_minimum_required(VERSION 3.16.1)

set(EXECUTABLE_NAME "Run_All_Unit_Tests.exe")

project(${EXECUTABLE_NAME} LANGUAGES C VERSION 1.0)

if("${CMAKE_BUILD_TYPE}" STREQUAL "Debug")
    set(GCC_WARN_COMPILE_FLAGS  " -Wall ")
    set(CMAKE_C_FLAGS  "${CMAKE_CXX_FLAGS} ${GCC_WARN_COMPILE_FLAGS}")
endif()

set(VM_SRC_DIR "../../..")
set(COMMON_TEST_DIR "../../Common_UnitTest_Code")
set(LEXICAL_TEST_DIR "../../State_Machine_Unit_Test/State_Machine_Unit_Test")
set(PARSER_TEST_DIR "../../Parser_Unit_Test/Parser_Unit_Test")
set(CMD_LINE_TEST_DIR "../../CommandLine_UnitTest/CommandLine_UnitTest")
set(HRF_TEST_DIR "../../HRF_UnitTest/HRF_UnitTest")


add_executable(${EXECUTABLE_NAME}
run_all_unit_tests_main.c
${HRF_TEST_DIR}/hrf_unit_test_main.c
${HRF_TEST_DIR}/unit_test_human_readable_program_format.c
${LEXICAL_TEST_DIR}/lexical_analyzer_unit_test_main.c 
${LEXICAL_TEST_DIR}/internal_character_transition_unit_tests.c
${LEXICAL_TEST_DIR}/internal_sytax_state_tests.c
${LEXICAL_TEST_DIR}/lexical_analyzer_test_data.c
${LEXICAL_TEST_DIR}/lexical_analyzer_unit_test_utilities.c
${VM_SRC_DIR}/error_reporting.c  
${VM_SRC_DIR}/safe_string_functions.c
${VM_SRC_DIR}/arg_flags.c
${VM_SRC_DIR}/file_io_vm.c
${VM_SRC_DIR}/opcode.c
${VM_SRC_DIR}/parser.c 
${VM_SRC_DIR}/default_program.c
${VM_SRC_DIR}/human_readable_program_format.c
${VM_SRC_DIR}/lexical_analyzer.c 
${VM_SRC_DIR}/virtual_machine.c 
${PARSER_TEST_DIR}/parser_unit_test_main.c
${PARSER_TEST_DIR}/internal_parser_tests.c
${PARSER_TEST_DIR}/parser_unit_test.c
${CMD_LINE_TEST_DIR}/command_line_unit_test_main.c
${COMMON_TEST_DIR}/unit_test_logging.c
)

set(CMAKE_C_STANDARD 99)
set(CMAKE_C_STANDARD_REQUIRED True)

configure_file(VMWithEditorConfig.h.in VMWithEditorConfig.h)

target_compile_definitions(${EXECUTABLE_NAME} PUBLIC UNIT_TESTING)
target_compile_definitions(${EXECUTABLE_NAME} PUBLIC ALL_UNIT_TESTING)
target_include_directories(${EXECUTABLE_NAME} PUBLIC "${PROJECT_BINARY_DIR}")
target_include_directories(${EXECUTABLE_NAME} PRIVATE "${VM_SRC_DIR}")
target_include_directories(${EXECUTABLE_NAME} PRIVATE "${COMMON_TEST_DIR}")
target_include_directories(${EXECUTABLE_NAME} PRIVATE "${LEXICAL_TEST_DIR}")
target_include_directories(${EXECUTABLE_NAME} PRIVATE "${CMD_LINE_TEST_DIR}")
target_include_directories(${EXECUTABLE_NAME} PRIVATE "${PARSER_TEST_DIR}")
target_include_directories(${EXECUTABLE_NAME} PRIVATE "${HRF_TEST_DIR}")

graphics – Can a LatticeData image show more than a unit cell?

I have an interactive code for showing crystallographic structures in Mathematica.

LatticeData[ Which[Type == "FCC", "FaceCenteredCubic", Type == "BCC", "BodyCenteredCubic", Type == "SC", "SimpleCubic"], "Image"]

However, this shows only one unit cell, which means the planes that don’t fit in the cell won’t be shown. Is there an easy way to show more than one unit cell in the LatticeData image?

server – Ubuntu 20.04 LTS systemd start up script is not working “bad unit file setting”

I’ve set up a Minecraft server and made a start script which works when I execute it. I’ve made a .service unit in systemd that should start the script when the server starts, but I am having issues with the systemd part of it. Here is what I have.

[Unit]
Description=DeadSky minecraft server
After=network-online.target

[Service]
User=mcadmin
ExecStart=/home/mcadmin/MCServer/ ./start.sh

[Install]
WantedBy=multi-user.target

please help

Java unit test Mock simulate file

I made a little game called Space Invaders; a working example can be found here: https://github.com/Koffiemolen/SpaceInvaders.git. Go to the second branch, SpaceInvadersV2.

In the module dataaccess I created a class HighscoreStore, which reads and writes to a file. The class GamePanel in the module userinterface uses this class.

I want to test layer separation, so I want to make a mock for it. The unit test needs to feed the GamePanel class with data, but the data needs to be generated in the unit test so I can test without using the file that contains the highscore.

I have the feeling that I’m going in circles and can’t get started. Do I need to make a copy of the GamePanel class and call it MockGamePanel, and create a mock class from HighscoreStore called MockHighscore that implements the same interface HighscoreStore uses and generates the data?

The interface is in the module business layer, package dataprovider.interfaces: HighscoreProvider.

package dataprovider.interfaces;

import logic.entities.Highscore;

public interface HighscoreProvider {
    Highscore getHighScore();
    void setHighScore(Highscore highscore);
}

Or is it the other way around?

unit testing – What to test on a rest API?

I created a REST controller with Spring Boot, and I am trying to learn what I should cover in my tests; right now I only check status codes and the existence of keys. I am planning to build an API to showcase at interviews.

I would like you to tell me what you think of my test class and what I should add. I will use your advice to write the rest of the API tests.

Rest Controller

    @RestController
    @RequestMapping("/countries")
    public class CountryController {
    
        private final CountryRepository countryRepository;
    
        private final CountryModelAssembler countryModelAssembler;
    
        public CountryController(CountryRepository countryRepository, CountryModelAssembler countryModelAssembler) {
            this.countryRepository = countryRepository;
            this.countryModelAssembler = countryModelAssembler;
        }
    
        @GetMapping("/")
        public CollectionModel<EntityModel<Country>> getCountries() {
            List<EntityModel<Country>> countries = this.countryRepository.findAll()
                    .stream()
                    .map(this.countryModelAssembler::toModel)
                    .collect(Collectors.toList());
            return CollectionModel.of(countries, linkTo(methodOn(CountryController.class).getCountries()).withSelfRel());
        }
    
        @GetMapping("/{id}")
        public EntityModel<Country> getCountry(@PathVariable Long id) {
            return this.countryRepository.findById(id).map(this.countryModelAssembler::toModel)
                    .orElseThrow(() -> new CountryNotFoundException(id));
        }
    
        @PostMapping("/")
        public ResponseEntity<?> saveCountry(@RequestBody Country country) {
            EntityModel<Country> entityModel = this.countryModelAssembler.toModel(this.countryRepository.save(country));
            return ResponseEntity.created(entityModel.getRequiredLink(IanaLinkRelations.SELF).toUri())
                    .body(entityModel);
        }
    
        @PutMapping("/{id}")
        public ResponseEntity<?> editCountry(@RequestBody Country country, @PathVariable Long id) {
            Country updatedCountry = this.countryRepository.findById(id).map(mappedCountry -> {
                mappedCountry.setName(country.getName());
                return this.countryRepository.save(mappedCountry);
            }).orElseGet(() -> {
                    country.setId(id);
                    return this.countryRepository.save(country);
            });
            EntityModel<Country> entityModel = this.countryModelAssembler.toModel(updatedCountry);
            return ResponseEntity.created(entityModel.getRequiredLink(IanaLinkRelations.SELF).toUri())
                    .body(entityModel);
        }
    
        @DeleteMapping("/{id}")
        public ResponseEntity<?> deleteCountry(@PathVariable Long id) {
            this.countryRepository.deleteById(id);
            return ResponseEntity.noContent().build();
        }
    
    }

Tests

@SpringBootTest
public class CountryControllerTest {

    @BeforeEach
    public void init() {
        RestAssured.baseURI = "http://127.0.0.1:8085/countries";
    }

    @Test
    public void getCountries() {
        Response getResponse = RestAssured
                .when()
                .get("/")
                .then()
                .extract()
                .response();

        getResponse.prettyPrint();

        getResponse
                .then()
                .assertThat()
                .statusCode(HttpStatus.OK.value());

    }

    @Test
    public void saveCountry() throws JSONException {
        JSONObject country = new JSONObject();
        country.put("name", "Honduras");

        Response postResponse = RestAssured
                .given()
                .contentType(ContentType.JSON)
                .body(country.toString())
                .when()
                .post("/")
                .then()
                .extract()
                .response();

        postResponse.prettyPrint();

        postResponse
                .then()
                .assertThat()
                .statusCode(HttpStatus.CREATED.value())
                .body("$", Matchers.hasKey("id"))
                .body("$", Matchers.hasKey("name"));
    }

    @Test
    public void getCountry() throws JSONException {
        JSONObject country = new JSONObject();
        country.put("name", "Panama");

        Response postResponse = RestAssured
                .given()
                .contentType(ContentType.JSON)
                .body(country.toString())
                .when()
                .post("/")
                .then()
                .extract()
                .response();

        postResponse.prettyPrint();
        System.out.println("****************");

        postResponse
                .then()
                .assertThat()
                .statusCode(HttpStatus.CREATED.value())
                .body("$", Matchers.hasKey("id"))
                .body("$", Matchers.hasKey("name"));

        String jsonResponse = postResponse
                .getBody()
                .asString();

        String selfPath = new JSONObject(jsonResponse)
                .getJSONObject("_links")
                .getJSONObject("self")
                .get("href")
                .toString();

        Response getResponse = RestAssured
                .when()
                .get(selfPath)
                .then()
                .extract()
                .response();

        getResponse.prettyPrint();

        getResponse
                .then()
                .statusCode(HttpStatus.OK.value())
                .body("$", Matchers.hasKey("id"))
                .body("$", Matchers.hasKey("name"));
    }

    @Test
    public void editCountry() throws JSONException {
        JSONObject country = new JSONObject();
        country.put("name", "Costa Rica");

        Response postResponse = RestAssured
                .given()
                .contentType(ContentType.JSON)
                .body(country.toString())
                .when()
                .post("/")
                .then()
                .extract()
                .response();

        postResponse.prettyPrint();
        System.out.println("****************");

        postResponse
                .then()
                .assertThat()
                .statusCode(HttpStatus.CREATED.value())
                .body("$", Matchers.hasKey("id"))
                .body("$", Matchers.hasKey("name"));

        String jsonPostResponse = postResponse
                .getBody()
                .asString();

        String selfReference = new JSONObject(jsonPostResponse)
                .getJSONObject("_links")
                .getJSONObject("self")
                .get("href")
                .toString();

        String newName = "Guatemala";

        country.put("name", newName);

        Response putResponse = RestAssured
                .given()
                .contentType(ContentType.JSON)
                .body(country.toString())
                .when()
                .put(selfReference)
                .then()
                .extract()
                .response();

        putResponse.prettyPrint();

        putResponse
                .then()
                .assertThat()
                .statusCode(HttpStatus.CREATED.value())
                .body("$", Matchers.hasKey("id"))
                .body("$", Matchers.hasKey("name"))
                .body("name", Matchers.equalTo(newName));
    }

    @Test
    public void deleteCountry() throws JSONException {
        JSONObject country = new JSONObject();
        country.put("name", "Costa Rica");

        Response postResponse = RestAssured
                .given()
                .contentType(ContentType.JSON)
                .body(country.toString())
                .when()
                .post("/")
                .then()
                .extract()
                .response();

        postResponse
                .then()
                .assertThat()
                .statusCode(HttpStatus.CREATED.value())
                .body("$", Matchers.hasKey("id"))
                .body("$", Matchers.hasKey("name"));

        postResponse.prettyPrint();

        String jsonPostResponse = postResponse
                .getBody()
                .asString();

        String selfReference = new JSONObject(jsonPostResponse)
                .getJSONObject("_links")
                .getJSONObject("self")
                .get("href")
                .toString();

        Response deleteResponse = RestAssured
                .when()
                .delete(selfReference)
                .then()
                .extract()
                .response();

        deleteResponse
                .then()
                .assertThat()
                .statusCode(HttpStatus.NO_CONTENT.value());
    }

}

c++ – How to Unit test / design differently a complicated free function

I have written a Command Line Interface, where the user has to construct an object basically by providing input to a bunch of questions. I have a hard time testing these functions as there is too much happening in there. Basically for every input, there is some validation and it will loop forever, print error message, asks again until the user enters a correct input.

A quite simplified case might look something like this:

// CommandLineInterface.h
// (Streams and the database are passed by reference: std::ostream and
// std::istream are not copyable, so by-value parameters would not compile.)
void createPerson(DatabaseClass& database, std::ostream& ostream, std::istream& istream);

// CommandLineInterface.cpp
namespace {
std::string getPersonNameInput(std::ostream& ostream, std::istream& istream) {
   while(true) {
     ostream << "Enter Person Name";
     std::string name;
     istream >> name;

     if(someOtherFunctionToValidateName(name))
        return name;

     ostream << "Some error message";
   }
}
}

void createPerson(DatabaseClass& database, std::ostream& ostream, std::istream& istream) {
    auto name = getPersonNameInput(ostream, istream);
    auto age = getPersonAgeInput();
    database.addPerson(Person { name, age });
}

So there is one function part of the public interface, which delegates input, error handling, validation to some helper function in an anonymous namespace.

I’ve learnt that you shouldn’t test implementation details (such as functions in an anonymous namespace or private functions), but only the public interface, which calls these directly. But I also learnt to test only one noticeable end result per unit (the big end result here is the successful call of some function with the constructed object, but there are lots of other noticeable results, such as the error messages). This might be an indicator that my function does too many things and does not separate concerns.

One “fix” would be to put getPersonNameInput in the header as well, make it part of the public interface, and then unit test it separately. I could then test createPerson by mocking this function. But that seems wrong to me as well: making helper functions part of the public interface.

Is my design just bad here? If yes, what would be ways to improve the design, separate concerns, and make it more testable? If no, how would I best test it? (Btw: I know that it’s sometimes possible to test private functions or functions in anonymous namespaces, but as said above, you usually would not want to test these.)

Thanks for help!

Android Car Unit – Touchscreen not working after ignition off and reboot

I have an Android 9 radio from Klyde (CSN2_D, PX5, Rockchip rk3368). There is a power-saving feature that cuts most of the power consumption 30 seconds after the ignition is turned off (is this called deep sleep?). Technically, the yellow main power cable stays connected and the red ignition power cable is cut. When I start the car, the unit reboots.

My problem: the touchscreen does not work after boot. This happens only with this mode. Forcing a reboot from the unit or from stock recovery, or rebooting after all power cables were cut, works fine. The same problem occurs when the unit is removed from the car, so it is not a wiring issue.

The problem has occurred since I tried a different launcher and removed it. It looks like this launcher changed some settings which remained permanently saved, because the FM radio stations that were stored at that moment are permanently stored in the unit as well.

I tried a lot:

  • Firmware update Android 9
  • Upgrade to Android 10
  • MCU update with different versions
  • Factory and cache reset
  • Complete new installation with Rockchip’s AndroidTool after forcing the unit into Maskrom mode
  • Erase flash with AndroidTool and new installation

I saved dmesg.log-files and here is the difference:

Working:
( 0.907815) of_get_named_gpiod_flags: parsed 'touch-gpio' property of node '/i2c@ff140000/gt9xx_main@14(0)' - status (0)
( 0.907841) of_get_named_gpiod_flags: parsed 'reset-gpio' property of node '/i2c@ff140000/gt9xx_main@14(0)' - status (0)
( 1.004583) Goodix-TS 2-0014: GTP X_MAX: 1024, Y_MAX: 600, TRIGGER: 0x01
( 1.016532) GTP Sensor_ID: 0
( 1.016554) ## GTP: 1024x600 customer touch type: -1 ; customer type1125 index: -1
( 1.016792) input: goodix-ts as /devices/platform/ff140000.i2c/i2c-2/2-0014/input/input1

No touchscreen:
( 0.909817) of_get_named_gpiod_flags: parsed 'touch-gpio' property of node '/i2c@ff140000/gt9xx_main@14(0)' - status (0)
( 0.909843) of_get_named_gpiod_flags: parsed 'reset-gpio' property of node '/i2c@ff140000/gt9xx_main@14(0)' - status (0)
( 0.995911) <<-GTP-ERROR->> I2C Read: 0x8140, 6 bytes failed, errcode: -6! Process reset.
( 1.071450) <<-GTP-ERROR->> GTP read version failed
( 1.123893) <<-GTP-ERROR->> I2C Read: 0x8140, 6 bytes failed, errcode: -6! Process reset.
( 1.168453) usb 2-1: new high-speed USB device number 2 using ehci-platform
( 1.199449) <<-GTP-ERROR->> GTP read version failed
( 1.199471) <<-GTP-ERROR->> Read version failed.

dmesg.log (crashed): https://drive.google.com/file/d/1N8TWXrFoGxvPumhxUAozXotdn2EDmfh2/view?usp=sharing

dmesg.log (working): https://drive.google.com/file/d/19i8fcykAAb2a-dYeJWyz4hvnSXHil0wK/view?usp=sharing

Any ideas what happened and how I can solve the problem?

Thanks a lot,
Claudius

terminology – Term for the unit of grouping large numbers?

In English and probably most (if not all) western languages, we group numbers by powers of 1000. So we have:
ones, tens, hundreds – then
thousands, ten-thousands, hundred-thousands – and so on.

We may take this for granted but it’s not universal. In Japanese, for example, we have something like:
ichi (ones), jyuu (tens), hyaku (hundreds), sen (thousands) – then
man (ten-thousands), jyuu-man (hundred-thousands), hyaku-man (millions), sen-man (ten-millions) – then
oku (hundred-millions), jyuu-oku (billions), hyaku-oku (ten-billions), sen-oku (hundred-billions)

So my question is: is there a term for this grouping? Can we say something like, "The (term) in English is 10^3, but the (term) in Japanese is 10^4"?

terminology – Is there a symbol to indicate a fraction normalized to the unit interval [0 to 1]?

I’m looking for a symbol/character that quickly conveys to the reader that the number that follows is to be understood as a decimal fraction in the range from 0 (minimum) to 1 (maximum). So for example “humidity: 0.4<symbol>” would represent the same idea as “humidity: 40%”.

This would be analogous to the % (per cent) and ‰ (per mille) signs. When you read a table column labeled as “% humidity” or “% unemployment” you understand well and immediately that the numbers that follow below are relative to the range 0 to 100.

I want to use this in table headers, function annotations, on UI buttons, as boilerplate text, and so forth; i.e. anywhere there isn't a lot of space and it's better to be succinct. It would appear alongside other shorthands such as Σ for sum, ⌀ for average, Δ for difference, # for ordinal, etc.

For now I'm using the unit-interval notation (0, 1) or (0…1) as a prefix or postfix, but it's not always well understood.

I should mention that I’m unable to modify the data itself, so I can’t just change 0.4 to 40% and use the percentage symbol instead.

c# – Is this enough for unit testing a basic controller?

My pet project is a community-driven lyrics archive. It is still a work in progress, and all the code is open-sourced on GitHub.

I have a local git branch tests/add-controller-tests where I wish to add some unit tests for my controllers. I have purposefully kept my controllers basic; for example, here is my HTTP GET Index action on the HomeController:

namespace Bejebeje.Mvc.Controllers
{
  using System.Diagnostics;
  using System.Threading.Tasks;
  using Bejebeje.Models.Lyric;
  using Bejebeje.Services.Services.Interfaces;
  using Microsoft.AspNetCore.Mvc;
  using Models;

  public class HomeController : Controller
  {
    private readonly IArtistsService _artistsService;

    private readonly ILyricsService _lyricsService;

    public HomeController(
      IArtistsService artistsService,
      ILyricsService lyricsService)
    {
      _artistsService = artistsService;
      _lyricsService = lyricsService;
    }

    public async Task<IActionResult> Index()
    {
      IndexViewModel viewModel = new IndexViewModel();

      viewModel.Lyrics = await _lyricsService
        .GetRecentLyricsAsync();

      viewModel.FemaleArtists = await _artistsService
        .GetTopTenFemaleArtistsByLyricsCountAsync();

      return View(viewModel);
    }

    [ResponseCache(Duration = 0, Location = ResponseCacheLocation.None, NoStore = true)]
    public IActionResult Error()
    {
      ErrorViewModel viewModel = new ErrorViewModel
      {
        RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier,
      };

      return View(viewModel);
    }
  }
}

And here is my unit test (just one):

namespace Bejebeje.Mvc.Tests.Controllers
{
  using System.Collections.Generic;
  using System.Threading.Tasks;
  using Bejebeje.Models.Artist;
  using Bejebeje.Models.Lyric;
  using FluentAssertions;
  using Microsoft.AspNetCore.Mvc;
  using Moq;
  using Mvc.Controllers;
  using NUnit.Framework;
  using Services.Services.Interfaces;

  [TestFixture]
  public class HomeControllerTests
  {
    [Test]
    public async Task Index_ReturnsAViewResult_WithAnIndexViewModel()
    {
      // arrange
      IEnumerable<ArtistItemViewModel> tenFemaleArtists = new List<ArtistItemViewModel>
      {
        new ArtistItemViewModel
        {
          FirstName = "A1",
          LastName = "A1",
          ImageAlternateText = "A1",
          ImageUrl = "A1",
          PrimarySlug = "A1",
        },
        new ArtistItemViewModel
        {
          FirstName = "A2",
          LastName = "A2",
          ImageAlternateText = "A2",
          ImageUrl = "A2",
          PrimarySlug = "A2",
        },
        new ArtistItemViewModel
        {
          FirstName = "A3",
          LastName = "A3",
          ImageAlternateText = "A3",
          ImageUrl = "A3",
          PrimarySlug = "A3",
        },
        new ArtistItemViewModel
        {
          FirstName = "A4",
          LastName = "A4",
          ImageAlternateText = "A4",
          ImageUrl = "A4",
          PrimarySlug = "A4",
        },
        new ArtistItemViewModel
        {
          FirstName = "A5",
          LastName = "A5",
          ImageAlternateText = "A5",
          ImageUrl = "A5",
          PrimarySlug = "A5",
        },
        new ArtistItemViewModel
        {
          FirstName = "A6",
          LastName = "A6",
          ImageAlternateText = "A6",
          ImageUrl = "A6",
          PrimarySlug = "A6",
        },
        new ArtistItemViewModel
        {
          FirstName = "A7",
          LastName = "A7",
          ImageAlternateText = "A7",
          ImageUrl = "A7",
          PrimarySlug = "A7",
        },
        new ArtistItemViewModel
        {
          FirstName = "A8",
          LastName = "A8",
          ImageAlternateText = "A8",
          ImageUrl = "A8",
          PrimarySlug = "A8",
        },
        new ArtistItemViewModel
        {
          FirstName = "A9",
          LastName = "A9",
          ImageAlternateText = "A9",
          ImageUrl = "A9",
          PrimarySlug = "A9",
        },
        new ArtistItemViewModel
        {
          FirstName = "A10",
          LastName = "A10",
          ImageAlternateText = "A10",
          ImageUrl = "A10",
          PrimarySlug = "A10",
        }
      };

      Mock<IArtistsService> mockArtistsService = new Mock<IArtistsService>();

      mockArtistsService
        .Setup(x => x.GetTopTenFemaleArtistsByLyricsCountAsync())
        .ReturnsAsync(tenFemaleArtists);

      IEnumerable<LyricItemViewModel> tenRecentLyrics = new List<LyricItemViewModel>
      {
        new LyricItemViewModel
        {
          Title = "L1",
          LyricPrimarySlug = "L1",
          ArtistId = 1,
          ArtistName = "L1",
          ArtistPrimarySlug = "L1",
          ArtistImageUrl = "L1",
          ArtistImageAlternateText = "L1",
        },
        new LyricItemViewModel
        {
          Title = "L2",
          LyricPrimarySlug = "L2",
          ArtistId = 2,
          ArtistName = "L2",
          ArtistPrimarySlug = "L2",
          ArtistImageUrl = "L2",
          ArtistImageAlternateText = "L2",
        },
        new LyricItemViewModel
        {
          Title = "L3",
          LyricPrimarySlug = "L3",
          ArtistId = 3,
          ArtistName = "L3",
          ArtistPrimarySlug = "L3",
          ArtistImageUrl = "L3",
          ArtistImageAlternateText = "L3",
        },
        new LyricItemViewModel
        {
          Title = "L4",
          LyricPrimarySlug = "L4",
          ArtistId = 4,
          ArtistName = "L4",
          ArtistPrimarySlug = "L4",
          ArtistImageUrl = "L4",
          ArtistImageAlternateText = "L4",
        },
        new LyricItemViewModel
        {
          Title = "L5",
          LyricPrimarySlug = "L5",
          ArtistId = 5,
          ArtistName = "L5",
          ArtistPrimarySlug = "L5",
          ArtistImageUrl = "L5",
          ArtistImageAlternateText = "L5",
        },
        new LyricItemViewModel
        {
          Title = "L6",
          LyricPrimarySlug = "L6",
          ArtistId = 6,
          ArtistName = "L6",
          ArtistPrimarySlug = "L6",
          ArtistImageUrl = "L6",
          ArtistImageAlternateText = "L6",
        },
        new LyricItemViewModel
        {
          Title = "L7",
          LyricPrimarySlug = "L7",
          ArtistId = 7,
          ArtistName = "L7",
          ArtistPrimarySlug = "L7",
          ArtistImageUrl = "L7",
          ArtistImageAlternateText = "L7",
        },
        new LyricItemViewModel
        {
          Title = "L8",
          LyricPrimarySlug = "L8",
          ArtistId = 8,
          ArtistName = "L8",
          ArtistPrimarySlug = "L8",
          ArtistImageUrl = "L8",
          ArtistImageAlternateText = "L8",
        },
        new LyricItemViewModel
        {
          Title = "L9",
          LyricPrimarySlug = "L9",
          ArtistId = 9,
          ArtistName = "L9",
          ArtistPrimarySlug = "L9",
          ArtistImageUrl = "L9",
          ArtistImageAlternateText = "L9",
        },
        new LyricItemViewModel
        {
          Title = "L10",
          LyricPrimarySlug = "L10",
          ArtistId = 10,
          ArtistName = "L10",
          ArtistPrimarySlug = "L10",
          ArtistImageUrl = "L10",
          ArtistImageAlternateText = "L10",
        },
      };

      Mock<ILyricsService> mockLyricsService = new Mock<ILyricsService>();

      mockLyricsService
        .Setup(x => x.GetRecentLyricsAsync())
        .ReturnsAsync(tenRecentLyrics);

      HomeController homeController = new HomeController(mockArtistsService.Object, mockLyricsService.Object);

      // act
      IActionResult actionResult = await homeController.Index();

      // assert
      ViewResult view = actionResult.Should().BeOfType<ViewResult>().Subject;
      IndexViewModel viewModel = view.Model.Should().BeOfType<IndexViewModel>().Subject;
      viewModel.FemaleArtists.Should().HaveCount(10);
      viewModel.Lyrics.Should().HaveCount(10);
    }
  }
}

As far as tests concerning the controller go, is there anything else that I should test? Also, do you have any suggestions on naming, making things more readable, etc.?

In my test, I know I can extract the code that builds the lists into a method; is there anything else?
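For reference, the extraction I have in mind looks something like the sketch below. This is just an illustration, not code from the project: the view model here is a simplified stand-in for the real ArtistItemViewModel, and the helper name BuildFemaleArtists is hypothetical. The idea is to replace the ten hand-written initializers with a single generator.

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for the project's ArtistItemViewModel,
// included only so the sketch compiles on its own.
public class ArtistItemViewModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string ImageAlternateText { get; set; }
    public string ImageUrl { get; set; }
    public string PrimarySlug { get; set; }
}

public static class TestData
{
    // Builds `count` artists with placeholder values "A1".."A{count}",
    // replacing the repetitive hand-written list in the test.
    public static IEnumerable<ArtistItemViewModel> BuildFemaleArtists(int count)
    {
        return Enumerable
            .Range(1, count)
            .Select(i => new ArtistItemViewModel
            {
                FirstName = $"A{i}",
                LastName = $"A{i}",
                ImageAlternateText = $"A{i}",
                ImageUrl = $"A{i}",
                PrimarySlug = $"A{i}",
            })
            .ToList();
    }
}
```

The arrange section of the test would then shrink to a single call such as `TestData.BuildFemaleArtists(10)`, and an analogous helper could build the ten LyricItemViewModel entries.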