unit testing – How do I really write tests without mocking/stubbing?

I have been using TDD when developing some of my side projects and have been loving it.

The issue, however, is that stubbing classes for unit tests is a pain and makes you afraid of refactoring.

I started researching, and I see that there is a group of people that advocates TDD without mocking: the classicists, if I am not mistaken.

However, how would I go about writing unit tests for a piece of code that uses one or more dependencies? For instance, if I am testing a UserService class that needs UserRepository (talks to the database) and UserValidator (validates the user), then the only way would be… to stub them?

Otherwise, if I use a real UserRepository and UserValidator, wouldn’t that be an integration test and also defeat the purpose of testing only the behavior of UserService?

Should I be writing only integration tests when there are dependencies, and unit tests for pieces of code without any dependencies?

And if so, how would I test the behavior of UserService? (“If UserRepository returns null, then UserService should return false”, etc.)
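
To make that scenario concrete, here is a rough TypeScript sketch of what I understand the classicist approach to be: a hand-rolled in-memory fake instead of a mocking framework (all the names are just the hypothetical ones from my example):

// Hand-rolled in-memory fake instead of a mocking framework; all names
// are the hypothetical ones from the example above.
interface User { id: string; name: string; }

interface UserRepository {
    findById(id: string): User | null;
}

class InMemoryUserRepository implements UserRepository {
    private users = new Map<string, User>();
    save(user: User) { this.users.set(user.id, user); }
    findById(id: string): User | null { return this.users.get(id) ?? null; }
}

class UserValidator {
    isValid(user: User): boolean { return user.name.length > 0; }
}

class UserService {
    constructor(private repo: UserRepository, private validator: UserValidator) {}

    // "If UserRepository returns null, then UserService should return false"
    isValidUser(id: string): boolean {
        const user = this.repo.findById(id);
        return user !== null && this.validator.isValid(user);
    }
}

// The test wires in the real validator and the in-memory fake repository.
const service = new UserService(new InMemoryUserRepository(), new UserValidator());
console.assert(service.isValidUser("missing-id") === false);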

Thank you.

testing – Kernel tests: Url toString with language option

In a Kernel test I need to test the result of a Url object per language, but the test doesn’t take the language settings into account.

Example:

public function register(ContainerBuilder $container) {
    parent::register($container);

    $language = Language::$defaultValues;
    $language['id'] = 'de';
    $language['name'] = 'German';
    $container->setParameter('language.default_values', $language);
}

protected function setUp() {
    parent::setUp();

    ConfigurableLanguage::createFromLangcode('fr')->save();
    ConfigurableLanguage::createFromLangcode('it')->save();
}

public function testUrls() {
    $languageManager = $this->container->get('language_manager');
    print_r($languageManager->getLanguages());

    $url = Url::fromRoute('<front>');
    $url->setOption('language', $languageManager->getLanguage('fr'));
    print_r($url->toString());
}

Output:

Array
(
    [de] => ...
    [fr] => ...
    [it] => ...
)

# Output of toString()
/

The output of toString() should be /fr instead of /.

Addendum:
I discovered that the \Drupal\language\HttpKernel\PathProcessorLanguage outbound path processor is missing when UrlGenerator::processPath() is called from the Kernel test. I therefore registered it in the Kernel test’s register() method, but that doesn’t solve the problem.

integration testing – Hypothetically, if every scenario were covered by an end-to-end test, would unit tests still have any value?

Note: I’m asking about the strategy behind unit / integration / end-to-end tests, not about classifying tests as one or the other.

More so in the past than the present, it was expensive to write and run end-to-end tests for every possible scenario. Now, though, it’s more feasible, for example with the increased use of test fixtures/emulators, or even lower latency and higher query limits on APIs. But of course there’s always going to be human error. We can never be sure we thought of every scenario.

Still, I’m asking anyway: hypothetically, given an oracle that tells us every possible scenario, or, to put it another way, discounting all scenarios except exactly the ones with end-to-end tests, how might unit tests still be valuable?

What I’m wondering is whether unit and integration tests are a kind of “alternate approach” to thinking about tests, and whether that’s the advantage of writing them even when aiming for “100%” end-to-end coverage: they might catch a scenario missed by human error. But eliminate human error in thinking up scenarios, and what do they do?

Some ideas I can come up with (but I’m hoping for even stronger answers!):

  • They encourage a useful coding methodology, e.g. TDD.
  • On a breaking change in a dependency (or a depended-on API), they reveal precisely where the break is (see the sketch below).
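
To make the second bullet concrete, here is a minimal TypeScript sketch (the checkout and tax functions are made up for illustration): the end-to-end assertion only says the scenario failed somewhere, while the unit assertion names the exact function that broke.

// Hypothetical helper buried inside a larger flow.
function taxFor(amountCents: number): number {
    return Math.round(amountCents * 0.2); // imagine someone changes this rate
}

function checkoutTotal(amountCents: number): number {
    return amountCents + taxFor(amountCents);
}

// End-to-end style assertion: only says the scenario is wrong, somewhere.
console.assert(checkoutTotal(1000) === 1200, "e2e: checkout total is wrong");

// Unit style assertion: points straight at the helper that changed.
console.assert(taxFor(1000) === 200, "unit: taxFor no longer returns 20%");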

user research – Do Usability Tests and Product Experience Maps go under the Discovery or Define Stage?

In an ideal world, you would expect to include usability testing at any stage where you are trying to design for a user interaction, to validate the research and/or assumptions you are making about design decisions.

In the practical world, it is very difficult to schedule projects and UX activities so that they are in sync with product development timelines, and you may discover more things during the project so there is no specific place where Usability Tests should be locked in (and you might find that multiple rounds are scheduled at different stages of the project).

Where you choose to do this and how often you do it (which will determine what you end up doing in the testing process) is generally constrained by project budget, access to users, and the process/method you propose for the testing. And when you get the results back, you’ll also have to make adjustments if it turns out that your assumptions were not quite on the mark.

As for the product experience map (I’m not sure exactly what format this entails for your project), it is an artefact that can be used to capture and summarize the main journey/experience of the end users, and it can be created at any stage of the project once you have enough research output to synthesize the information into a usable artefact. However, you should also continuously update this document as the research and design mature, so that it remains an up-to-date reflection (or set of versions) of the research and product development cycles.

I think the more you think about the project in terms of the information you need and what you will do with that information, the more it will help you determine and tailor the activities and artefacts you need (there are good articles on this elsewhere). So rather than going with a prescribed method or plan, having something loosely based on the closest approach that fits your project, and being prepared to tweak or adjust some of the activities along the way, will work best (this means building some time buffers into the project schedule).

user research – Do Usability Tests and Product Experience Maps go under Discovery or Testing?

I’m creating education materials on UX for my company, to help better educate the team and secure more funding for UX. But as I was reading about how we define the stages of UX, I came across what seems to be a paradox / conflicting information.

The Discovery stage, to put it very briefly as I’ve understood it, is about exploring problems, whether on your own product or problems prospective users may be facing, and building empathy. This then moves to the Define stage, where the team forms alignment around evidence-based results from discovery.

I always assumed usability testing was part of the Discovery stage, and that you can build personas and experience maps based on it in the Define stage. Because if the Define stage is about forming alignment, an experience map allows you to take pain points and convert them into goals, which hands off to the development stage.

However, in my research I see that usability testing is not often put in the “Discovery Stage”. Examples here and here.

In the above examples, only user interviews are included. Usability tests can include pre- and exit interviews, which aid the discovery stage. Usability tests also help discover what problems users are facing. So why are they left out?

I see usability tests as Discovery and product experience mapping as Define.

But I don’t want to commit to this on paper without reaching out to my fellow UXers to hear their thoughts and get some guidance. I have never tried to break the process down in such a defined way before, and I am running into material that makes me feel my process of DISCOVERY: Interview -> Usability Test -> DEFINE: Experience Map -> Personas -> DEVELOP STAGE is wrong.

Thanks for all those that contribute in a positive way to this discussion.

testing – Is it necessary to run tests in all environments?

You create tests to prove that something works correctly, and you re-execute those tests to prove that it still works correctly.

To determine whether it makes sense to run a particular set of tests in multiple environments, you need to check what the differences between those environments are, and whether those differences might affect the outcome of the tests.

One important aspect to look at is the codebase being used in each environment. If the DEV environment uses a feature branch, or even the code a developer has locally, and STG is the first environment where multiple features can meet each other, then it is important to execute the unit tests in both environments, as each has a different codebase that has not yet been tested in another environment.

On the other hand, if the integrated code has already been tested in the DEV environment, then the “untested codebase” reason for running those tests in the STG environment no longer applies.

Testing in a PRD (production) environment should be restricted to a simple check that the configuration is correct. All other tests should have been performed in the other environments before the code reached PRD.
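
As a minimal sketch of such a configuration check (the /health endpoint and BASE_URL variable are hypothetical, and Node 18+’s built-in fetch is assumed):

// Smoke test for PRD: only verifies the deployed configuration responds.
const baseUrl = process.env.BASE_URL ?? "https://example.com";

async function smokeTest(): Promise<void> {
    const res = await fetch(`${baseUrl}/health`); // hypothetical health endpoint
    if (!res.ok) {
        throw new Error(`Smoke test failed: ${res.status} ${res.statusText}`);
    }
    console.log("Configuration looks correct:", res.status);
}

smokeTest();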

Twitter Tests 140-Seconds Voice Tweets

Twitter is testing, with a limited group on iOS, the ability to add up to 140 seconds of voice to a tweet.

design patterns – What to do with a legacy code base with no unit tests and complex architectural structure

I have been working at my company for almost a year now, primarily focused on adding features to and maintaining two 15+-year-old WPF projects and one 20+-year-old WinForms project. The codebases have absolutely no unit testing; the philosophy is that every engineer should be able to write bug-free code. The WPF projects are big, follow DDD, and try to follow MVVM unless it becomes inconvenient. They do not follow established design patterns, so features are added any way the engineer sees fit. There are no code reviews, so it’s not hard to see how new members of the team could struggle with the codebase, as I have these last couple of months. So I wanted to ask what would be the most valuable course of action for me, given the following metrics from Microsoft’s FxCop analyzers.

WPF Project 1

[screenshot of FxCop code metrics]

WPF Project 2

[screenshot of FxCop code metrics]

I’m overwhelmed by the numbers and don’t know which problem should be the main priority. We have some very high cyclomatic complexity numbers that would make unit testing really challenging, and some high coupling numbers that beg for refactoring. There are endless places where design patterns could be applied to clean up the code. My main priority now is to fix some of the codebase’s problems, but I don’t know what would bring the most value. I feel inclined to just start unit testing like a madman.

What do you guys think should be the next step after looking at the metrics? Should I start unit testing before refactoring? Are unit tests overrated, and should I adopt my company’s philosophy and just focus on addressing the cyclomatic complexity and coupling problems? Am I even asking the right questions here?

Just looking for a bit of guidance, as this is a huge challenge for me. I work with a team, but I am taking this initiative on my own with only 6 years of experience, so there is not enough time or money to really address everything. I also realize that whatever the next step is, it will involve convincing my boss, so any tips on that would also be appreciated.

database – Is splitting unit tests from integration tests with mocks worth the effort (in Node.js)?

Well, consider a relatively simple server for a SPA application, written in Node.js with Express and Knex as backends.

Now, if we do it properly, each function will have unit tests (as they are always all visible to the end user). But apart from choosing which functions to write unit tests for, is abstracting away the database actually worth the effort?

A simple function might look like:

async function doSomething(request_object, response_object) {
    const request_data = analyze_request(request_object);
    const db_data = await grabFromDatabase(request_data); // this is often just a knex query
    const result = reformatDBQuery(db_data);
    response_object.give_response(result);
}

Now, to do it properly, one would create mocks/stubs that either mock the Knex library, or mock the interface methods we use ourselves.
Both cases require a lot of writing, often almost amounting to recreating the library ourselves.

The other option is to create stubs only for the exact function arguments required, but this would make the tests quite brittle, and the most common error I experience (by a large margin) wouldn’t really be caught by it: unexpected argument values in some function that work but give nonsensical results that fail at another place.

So a third option is possible: actually just use a database for testing. However, this means the unit test is no longer a unit test; it’s really just an integration test. As this is about the only “complexity” that isn’t trivial code, it makes little sense to even have unit tests (the other tests are so trivial they can easily be added to the integration tests).

Especially since “launching” a real database connection in JavaScript is quite fast anyway, while writing mocks for every possible function in a library is very, very time consuming.
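
For what it’s worth, here is roughly what that third option can look like with an in-memory SQLite database behind Knex (the users table and grabFromDatabase are made up for the sketch):

import { knex } from "knex";

// A real database, but cheap enough to spin up fresh for every test run.
const db = knex({
    client: "sqlite3",
    connection: { filename: ":memory:" },
    useNullAsDefault: true,
});

// Hypothetical function under test: "often just a knex query".
function grabFromDatabase(userId: number) {
    return db("users").where({ id: userId }).first();
}

async function run() {
    await db.schema.createTable("users", (t) => {
        t.increments("id");
        t.string("name");
    });
    await db("users").insert({ name: "alice" });

    const row = await grabFromDatabase(1);
    console.assert(row?.name === "alice", "expected to read back the inserted row");

    await db.destroy();
}

run();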

So what do others do, especially with regard to “simple servers” where the complexity is found only in database calls?

EDIT, to clarify: I understand the case for unit tests on top of integration tests. I wonder, however, what people do in practice given the time constraints typically present, and how in reality people sidestep the problem of very, very time consuming mocks.

xunit – C#: writing tests for WriteFileOutput

I am new to writing unit tests and I am facing some difficulty writing unit tests for this class.
I am using xUnit and Moq.

using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public class WriteFileOutput : IWriteFileOutput
{
    // This property shadows System.IO.File, hence the fully qualified calls below.
    public string File { get; set; }

    public WriteFileOutput(string fileName)
    {
        File = fileName;
    }

    public void writeFile(List<CustomerDistanceRecord> invitees)
    {
        var result = JsonConvert.SerializeObject(invitees, Formatting.Indented);

        // Delete any existing file, then write a fresh copy of the serialized records.
        if (System.IO.File.Exists(File))
        {
            System.IO.File.Delete(File);
        }

        using (var tw = new StreamWriter(File, true))
        {
            tw.WriteLine(result);
        }
    }
}

The code below tests that WriteFileOutput writes out a file. I am not sure if this is a good unit test, and I was wondering if someone could show me a better way of testing this class. I am also not sure whether deleting the file in the test constructor before it is called is a good idea. Thanks for the help.

private readonly string _path = "../../../TestOutput/CustomerRecords.json";

public WriteFileTests()
{
    // Start every test from a clean slate by removing any previous output.
    if (File.Exists(_path))
    {
        File.Delete(_path);
    }
}

[Fact]
public void WriteFileOutput_should_write_a_file()
{
    var fileWriteFileOutput = new WriteFileOutput(_path);
    // It.IsAny<T>() is only meaningful inside a Moq Setup/Verify expression;
    // called directly it returns null, so pass a real (empty) list instead.
    fileWriteFileOutput.writeFile(new List<CustomerDistanceRecord>());
    Assert.True(File.Exists(_path));
}