c# – SmartUnit: Unit Testing with DI

I was thinking about how unit tests require you to manually instantiate all your dependencies, even though many of them are the same for every test or don’t really matter (such as logging, where people usually just inject mocks). As a possible solution to this problem, I wrote a unit test framework that injects any known dependencies into the class or method, falling back to a mock for unknown interface dependencies and null for everything else. It also supports theory-type tests with nested methods. Here are some quick examples before we dive into the code:

Declaring a test

[Assertion]
public void MyTest() {}

Configuring dependencies

public interface IBar {}
public class Bar : IBar {}
public class Foo
{
   public Foo(IBar bar) {}
}

public class AssertionConfiguration : AssertionSet
{
   public override void Configure()
   {
       this.AddSingleton<Foo>();
       this.AddSingleton<IBar, Bar>();
   }
}

Using the AssertionSet attribute

This goes on either a class, where it applies to all tests, or on a method, where it overrides any assertion set declared on the class.

[AssertionSet(typeof(AssertionConfiguration))]
[Assertion]
public void MyTest(Foo foo, IBar bar) {}

Theory-type tests

public void MyTest([Callback] Action action)
{
    action();

    [Assertion]
    void Foo() {}

    [Assertion]
    void Bar() {}
}

Skipping a test

Apply the Skip attribute to the test method.

[Skip(Reason = "Doesn't work because ...")]

Assertions

A successful test is considered to be one that completes without exceptions, so while these assertion extension methods are provided, any assertion library will work.

obj.AssertThat<MyType>(o => o == 1);
obj.AssertThatAsync<MyType>(async o => (await o.GetSomething()) == 1);
obj.AssertException<MyType, Exception>(o => o.ThrowException());
obj.AssertExceptionAsync<MyType, Exception>(async o => await o.ThrowException());
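
Putting these pieces together, a complete test that combines the assertion set from earlier with the extension methods might look like the following sketch (the method name and failure messages are illustrative):

[AssertionSet(typeof(AssertionConfiguration))]
[Assertion(Name = "Foo resolves from the assertion set")]
public void FooResolves(Foo foo, IBar bar)
{
    foo.AssertThat(f => f is not null, "Foo was not resolved");
    bar.AssertThat(b => b is Bar, "IBar was not bound to Bar");
}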

Known Limitations

Each test is run in a new instance of its containing class. This prevents accidental shared state between tests and allows tests to override the assertion set used. It also prevents intentional shared state between tests, but that can be overcome by defining and injecting a singleton of state.
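
One way to read "injecting a singleton of state": register a single pre-built instance in an assertion set, so every test's provider hands back the same object even though each run builds a fresh provider. A rough sketch with hypothetical names (SharedState, SharedStateConfiguration):

public class SharedState
{
    public int Counter { get; set; }
}

public class SharedStateConfiguration : AssertionSet
{
    // One instance for the whole test run, shared across providers.
    private static readonly SharedState State = new SharedState();

    public override void Configure()
    {
        this.AddSingleton(State);
    }
}

Tests decorated with [AssertionSet(typeof(SharedStateConfiguration))] can then take SharedState as a parameter and see each other's changes.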

Test names must be unique; otherwise, the adapter won’t be able to find a match, as it only looks methods up by name. You can’t have Foo(IFoo foo, Bar bar) and Foo(Bar bar, IFoo foo), either as parent tests or nested theory tests. The test discoverer will find them, but the runner will crash running them; this could be mitigated with a VS analyzer, but I haven’t written one yet.

Test method return types must be awaitable. void, Task, ValueTask, and other awaitable types are good. int and other non-awaitable types will crash.
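
For instance, an async test can simply return Task and will be awaited by the runner:

[Assertion]
public async Task MyAsyncTest()
{
    await Task.Delay(10);
}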

Top-level methods (available with C# 9 top-level statements) cannot be theories. The reason behind this is that nested methods are generated as ordinary methods with a non-speakable name (something like <Parent>g__Name|0_0), which is how the adapter recovers the parent. All top-level methods are given non-speakable names and treated as children of the generated Main method, so there is no way for me to identify which parent method they belong to.

Testing framework

namespace SmartUnit
{
    [AttributeUsage(AttributeTargets.Method)]
    public class AssertionAttribute : Attribute
    {
        public string? Name { get; set; }
    }

    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false)]
    public class AssertionSetAttribute : Attribute
    {
        public Type AssertionSetType { get; }

        public AssertionSetAttribute(Type assertionSetType)
        {
            if (assertionSetType.BaseType != typeof(AssertionSet))
            {
                throw new ArgumentException(null, nameof(assertionSetType));
            }

            AssertionSetType = assertionSetType;
        }
    }

    [AttributeUsage(AttributeTargets.Parameter)]
    public class CallbackAttribute : Attribute { }

    [AttributeUsage(AttributeTargets.Method)]
    public class SkipAttribute : Attribute
    {
        public string? Reason { get; set; }
    }

    public abstract class AssertionSet : ServiceCollection
    {
        public abstract void Configure();
    }

    internal class AssertionException : Exception
    {
        internal AssertionException(string? message) : base(message) { }
    }

    public static class AssertExtensions
    {
        public static T AssertThat<T>(this T obj, Func<T, bool> assertion, string? failureMessage = null)
        {
            if (assertion(obj))
            {
                return obj;
            }

            throw new AssertionException(failureMessage);
        }

        public static async Task<T> AssertThatAsync<T>(this T obj, Func<T, Task<bool>> assertion, string? failureMessage = null)
        {
            if (await assertion(obj))
            {
                return obj;
            }

            throw new AssertionException(failureMessage);
        }

        public static T AssertException<T, TException>(this T obj, Action<T> assertion, string? failureMessage = null) where TException : Exception
        {
            try
            {
                assertion(obj);
            }
            catch (TException)
            {
                return obj;
            }

            throw new AssertionException(failureMessage);
        }

        public static async Task<T> AssertExceptionAsync<T, TException>(this T obj, Func<T, Task> assertion, string? failureMessage = null) where TException : Exception
        {
            try
            {
                await assertion(obj);
            }
            catch (TException)
            {
                return obj;
            }

            throw new AssertionException(failureMessage);
        }
    }
}

Visual Studio test runner

namespace SmartUnit.TestAdapter
{
    (FileExtension(".dll"))
    (FileExtension(".exe"))
    (DefaultExecutorUri(ExecutorUri))
    (ExtensionUri(ExecutorUri))
    (Category("managed"))
    public class TestRunner : ITestDiscoverer, ITestExecutor
    {
        public const string ExecutorUri = "executor://SmartUnitExecutor";

        private CancellationTokenSource cancellationToken = new CancellationTokenSource();

        public void DiscoverTests(IEnumerable<string> sources, IDiscoveryContext discoveryContext, IMessageLogger logger, ITestCaseDiscoverySink discoverySink)
        {
            foreach (var testCase in DiscoverTestCases(sources))
            {
                discoverySink.SendTestCase(testCase);
            }
        }

        public void Cancel()
        {
            cancellationToken.Cancel();
        }

        public async void RunTests(IEnumerable<TestCase> tests, IRunContext runContext, IFrameworkHandle frameworkHandle)
        {
            foreach (var testCase in tests)
            {
                if (cancellationToken.IsCancellationRequested)
                {
                    break;
                }

                await RunTestCase(testCase, frameworkHandle);
            }
        }

        public async void RunTests(IEnumerable<string> sources, IRunContext runContext, IFrameworkHandle frameworkHandle)
        {
            foreach (var testCase in DiscoverTestCases(sources))
            {
                if (cancellationToken.IsCancellationRequested)
                {
                    break;
                }

                await RunTestCase(testCase, frameworkHandle);
            }
        }

        private IEnumerable<TestCase> DiscoverTestCases(IEnumerable<string> sources)
        {
            foreach (var source in sources)
            {
                var sourceAssemblyPath = Path.IsPathRooted(source) ? source : Path.Combine(Directory.GetCurrentDirectory(), source);

                var assembly = Assembly.LoadFrom(sourceAssemblyPath);
                var tests = assembly.GetTypes()
                    .SelectMany(s => s.GetMethods(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance))
                    .Where(w => w.GetCustomAttribute<AssertionAttribute>() is not null)
                    .ToList();

                foreach (var test in tests)
                {
                    var testDisplayName = test.Name;
                    // Nested local functions are emitted with compiler-generated names like
                    // <Parent>g__Name|0_0 (or <<Main>$>g__Name|0_0 for top-level statements),
                    // so recover "Parent.Name" for display.
                    if (test.Name.StartsWith('<') && !test.Name.StartsWith("<<Main>$>"))
                    {
                        var parentTestName = test.Name.Split('>')[0][1..];
                        testDisplayName = parentTestName + '.' + test.Name.Split('>')[1][3..].Split('|')[0];
                    }
                    if (test.Name.StartsWith("<<Main>$>"))
                    {
                        var parentTestName = test.Name.Split('>')[0][2..];
                        testDisplayName = parentTestName + '.' + test.Name.Split('>')[2][3..].Split('|')[0];
                    }

                    var assertionAttribute = test.GetCustomAttribute<AssertionAttribute>()!;
                    var testCase = new TestCase(test.DeclaringType!.FullName + "." + test.Name, new Uri(ExecutorUri), source)
                    {
                        DisplayName = string.IsNullOrEmpty(assertionAttribute.Name) ? testDisplayName : assertionAttribute.Name,
                    };

                    yield return testCase;
                }
            }
        }

        private MethodInfo GetTestMethodFromCase(TestCase testCase)
        {
            var sourceAssemblyPath = Path.IsPathRooted(testCase.Source) ? testCase.Source : Path.Combine(Directory.GetCurrentDirectory(), testCase.Source);
            var assembly = Assembly.LoadFrom(sourceAssemblyPath);

            var fullyQualifiedName = testCase.FullyQualifiedName;
            var nameSeparatorIndex = fullyQualifiedName.LastIndexOf('.');
            var typeName = fullyQualifiedName.Substring(0, nameSeparatorIndex);

            var testClass = assembly.GetType(typeName);
            return testClass.GetMethod(fullyQualifiedName.Substring(nameSeparatorIndex + 1), BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance);
        }

        private void RecordSkippedTest(TestCase testCase, string? reason, ITestExecutionRecorder recorder)
        {
            var now = DateTime.Now;
            var testResult = new TestResult(testCase)
            {
                Outcome = TestOutcome.Skipped,
                StartTime = now,
                EndTime = now,
                Duration = new TimeSpan(),
                DisplayName = testCase.DisplayName,
                ErrorMessage = reason
            };

            recorder.RecordResult(testResult);
        }

        private void RecordPassedTest(TestCase testCase, DateTime start, DateTime end, ITestExecutionRecorder recorder)
        {
            var testResult = new TestResult(testCase)
            {
                Outcome = TestOutcome.Passed,
                StartTime = start,
                EndTime = end,
                Duration = end - start,
                DisplayName = testCase.DisplayName
            };

            recorder.RecordResult(testResult);
        }

        private void RecordFailedTest(TestCase testCase, DateTime start, DateTime end, Exception ex, ITestExecutionRecorder recorder)
        {
            var testResult = new TestResult(testCase)
            {
                Outcome = TestOutcome.Failed,
                StartTime = start,
                EndTime = end,
                Duration = end - start,
                DisplayName = testCase.DisplayName,
                ErrorMessage = ex.Message,
                ErrorStackTrace = ex.StackTrace
            };

            recorder.RecordResult(testResult);
        }

        private async ValueTask RunTestCase(TestCase testCase, ITestExecutionRecorder recorder)
        {
            var testMethod = GetTestMethodFromCase(testCase);
            if (testMethod.GetCustomAttribute<SkipAttribute>() is not null)
            {
                RecordSkippedTest(testCase, testMethod.GetCustomAttribute<SkipAttribute>()!.Reason, recorder);
                return;
            }

            recorder.RecordStart(testCase);
            var start = DateTime.Now;

            try
            {
                if (testMethod.Name.StartsWith('<') && !testMethod.Name.StartsWith("<<Main>$>"))
                {
                    await RunNestedTest(testMethod);
                }
                else
                {
                    await RunTest(testMethod);
                }

                var end = DateTime.Now;
                RecordPassedTest(testCase, start, end, recorder);
            }
            catch (Exception ex)
            {
                var end = DateTime.Now;
                RecordFailedTest(testCase, start, end, ex.InnerException ?? ex, recorder);
            }
        }

        private async ValueTask RunNestedTest(MethodInfo test)
        {
            var parentMethodName = test.Name.Split('>')[0].Substring(1);
            var parentMethod = test.DeclaringType!.GetMethod(parentMethodName);

            await RunTest(parentMethod, test);
        }

        private async ValueTask RunTest(MethodInfo test, MethodInfo? callback = null)
        {
            var assertionSetAttribute = test.DeclaringType!.GetCustomAttribute<AssertionSetAttribute>();
            if (test.GetCustomAttribute<AssertionSetAttribute>() is not null)
            {
                assertionSetAttribute = test.GetCustomAttribute<AssertionSetAttribute>();
            }

            if (assertionSetAttribute is null)
            {
                await RunTestWithoutAssertionSet(test, callback);
            }
            else
            {
                await RunTestWithAssertionSet(test, callback, assertionSetAttribute.AssertionSetType);
            }
        }

        private async ValueTask RunTestWithoutAssertionSet(MethodInfo test, MethodInfo? callback)
        {
            var parameters = test.GetParameters().Select(s =>
            {
                if (s.GetCustomAttribute<CallbackAttribute>() is not null && callback is not null)
                {
                    return Delegate.CreateDelegate(s.ParameterType, callback);
                }

                if (s.ParameterType.IsInterface)
                {
                    var mock = (Mock)Activator.CreateInstance(typeof(Mock<>).MakeGenericType(s.ParameterType))!;
                    return mock.Object;
                }

                return null;
            }).ToArray();

            var typeInstance = test.DeclaringType!.IsAbstract && test.DeclaringType.IsSealed ? null : Activator.CreateInstance(test.DeclaringType);
            if (test.DeclaringType.IsAbstract && test.DeclaringType.IsSealed)
            {
                await InvokeTest(test, typeInstance, parameters);
            }
            else
            {
                await InvokeTest(test, typeInstance, parameters);
            }
        }

        private async ValueTask RunTestWithAssertionSet(MethodInfo test, MethodInfo? callback, Type assertionSetType)
        {
            var assertionSetInstance = Activator.CreateInstance(assertionSetType) as AssertionSet;
            assertionSetInstance!.Configure();
            assertionSetInstance.AddSingleton(test.DeclaringType!);

            var provider = assertionSetInstance.BuildServiceProvider();
            var parameters = test.GetParameters().Select(s =>
            {
                if (s.GetCustomAttribute<CallbackAttribute>() is not null && callback is not null)
                {
                    return Delegate.CreateDelegate(s.ParameterType, callback);
                }

                var service = provider.GetService(s.ParameterType);
                if (service != null)
                {
                    return service;
                }

                if (s.ParameterType.IsInterface)
                {
                    var mock = (Mock)Activator.CreateInstance(typeof(Mock<>).MakeGenericType(s.ParameterType))!;
                    return mock.Object;
                }

                return null;
            }).ToArray();

            var typeInstance = test.DeclaringType!.IsAbstract && test.DeclaringType.IsSealed ? null : provider.GetRequiredService(test.DeclaringType);
            await InvokeTest(test, typeInstance, parameters);
        }

        private async ValueTask InvokeTest(MethodInfo methodInfo, object? typeInstance, object?[]? parameters)
        {
            var isAwaitable = methodInfo.ReturnType.GetMethod(nameof(Task.GetAwaiter)) != null;
            if (isAwaitable)
            {
                await (dynamic)methodInfo.Invoke(typeInstance, parameters)!;
            }
            else
            {
                methodInfo.Invoke(typeInstance, parameters);
            }
        }
    }
}

Additional Resources

GitHub

testing – How to simulate test data to a database?

The best way to get test data in is the same way it will get in in production, because that's the most realistic.

So the question here is: how exactly does this third-party software get its data in? Inserts, a sproc, an API, SSIS? Use the same method to insert your test data.

If you don't know or can't tell, then you could run the tool and check the database log, or monitor the network traffic. Perhaps you can even replay the transactions.
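
As a hedged illustration of "use the same method": if monitoring shows the third-party tool calling a stored procedure, the test setup can call that same procedure. The procedure name and parameters below are made up for the example:

using System.Data;
using System.Data.SqlClient;

public static void InsertTestReading(string connectionString, int sensorId, decimal value)
{
    using var connection = new SqlConnection(connectionString);
    // Hypothetical: the vendor tool was observed calling dbo.ImportReading.
    using var command = new SqlCommand("dbo.ImportReading", connection)
    {
        CommandType = CommandType.StoredProcedure
    };
    command.Parameters.AddWithValue("@SensorId", sensorId);
    command.Parameters.AddWithValue("@Value", value);

    connection.Open();
    command.ExecuteNonQuery();
}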

Best way to keep development and testing in synch

Incentives

Are they in the right place for what you want?

From what I'm hearing, the developers have a vested interest in merging code as quickly as possible, potentially even too fast, to the detriment of other needs. As you've pointed out, they are rewarded for getting their feature branch into the next build.

The QA on the other hand has a vested interest in ensuring good test coverage, even if that means a feature branch does not make the next build.

That’s a conflict of interest.

That is not necessarily a bad thing. Adversarial testing is a good way to uncover bugs. But on a collaborative project, conflict always means going slower.

Values

So what is it that you value? When it comes down to it, what will you pick while forgoing the others?

In this situation is it:

  1. We want features – the people can beta-test for us!
  2. We want well-tested code! Features?! What are those foreign things? Never heard of them!

If the value is to get the features to the people, then the devs' incentives are right. The QA is, and always will be, playing catch-up.

If the value is to ensure only tested code gets through, then the incentives need to change. It must be made apparent to the devs that it isn't enough to have written the feature and their own unit tests. The feature isn't going anywhere until QA have written their own tests and those tests pass.

Now, though, you will have an issue with team balance. Three devs vs. one QA is unbalanced. You could rectify this by hiring more QAs, but the other solution is to get the developers assisting in the QA work for the other side, with the QA providing a four-eyes check on the testing they do.

i.e. the front-end developer can test the back-end system, and the back-end devs can test the front end.

By doing this the devs gain an appreciation of testing, and the learning they do will feed back into the way they develop the system, making it easier to test in future. It will also improve test coverage and, because developers love to optimise, probably improve test stability and execution speed. It also draws the devs into thinking of the QA as being on their side, which should lessen resentment when their branch doesn't make the next build.

usability testing – How do you get users to think aloud?

Use a rubber duck.

No, seriously!

Put a little rubber duck near the user. Tell the user the rubber duck's name – the more incongruous the name, the better. Mine is called Frank The Duck.

You see, Frank The Duck is a bit dumb. I tell the user that Frank doesn't know how to use the system, and so I need to teach it. The problem, however, is that since I'm a technical person, I'm really bad at explaining the things that need to be explained in practical, everyday terms – regular English.

So now the user has a little humorous mission to undertake with me. I'm putting the user in a position of power, making him teach someone, giving him the reins of the experience. I tell the user that I'm the incompetent one, so the user feels more confident about himself and what should be done. And then I encourage the user to explain things the way he would teach someone.

Teaching is an interesting experience. It puts your brain in a different mode that changes a lot about how you think and how you speak, creating some sort of direct bridge between your ideas and your mouth. If the user merely needs to explain, he will feel that what he is doing is a bit pointless – you already know what he is doing, so why should he bother? However, if the user is going to guide someone through using the system, things change. The user is in control now; he is the guide. He is responsible for getting the task done.

You can get some experience from Let's Play channels on YouTube. Those people rarely talk when they are playing alone at home. But when they have an audience – even a silent one – they become really talkative, explaining every little bit of what they are doing, mixing in some side commentary here and there. Their real audience, however, is just a camera, a lifeless electronic device nearby. Have you ever had a friend over at your place while playing video games, or even a little sibling? The effect is pretty much the same.

Don’t ask them to explain – ask them to teach. To guide you. To show you stuff. Let them be responsible for the trip. The extra confidence will give you very nice results.

How can you perform load testing in Tableau?

Can anyone explain how you can perform load testing in Tableau?

unit testing – Testable pattern for bytecode interpreter

I am developing a virtual machine for a prototype-oriented language, and I have found one problem.

I don't know how to write an interpreter that can be tested using unit tests – a naive implementation of interpret needs the whole environment to do anything, which makes unit testing impossible.

The only idea I have is to split the bytecode executor (executing bytecodes) from the environment holder (holding frames etc.), but I think this is not enough. Is there any document about developing testable interpreters?
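
Not an authoritative answer, but one way to sketch the split the question describes: the executor depends only on a small environment interface, so unit tests can drive it with a fake instead of the full VM. All names here are invented:

public interface IExecutionEnvironment
{
    void Push(object value);
    object Pop();
}

public sealed class BytecodeExecutor
{
    private readonly IExecutionEnvironment environment;

    public BytecodeExecutor(IExecutionEnvironment environment) => this.environment = environment;

    public void Execute(byte opcode)
    {
        switch (opcode)
        {
            case 0x01: // hypothetical DUP opcode
                var top = environment.Pop();
                environment.Push(top);
                environment.Push(top);
                break;
            // other opcodes...
        }
    }
}

A unit test can then assert against a stub IExecutionEnvironment without constructing frames, heaps, or the rest of the runtime.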

performance – While testing our workload on SQL 2019 we experience much higher CPU utilization compared to the same test on SQL 2014

We are getting ready to upgrade our SQL 2014 systems to SQL Server 2019. As part of our due diligence we created a workload which we are testing against both systems. What we have observed was the following:

  1. The query performance on average is about 50% faster on SQL 2019 – good!
  2. The reads are about 40% less on SQL 2019 – good!
  3. The CPU utilization is about 30% higher on average – not good.
    This last point is what causes our concern. Does this mean that we have to plan to increase our CPU capacity as part of our migration to SQL 2019?

To describe what we are seeing from a slightly different angle: when we attempt to ramp up our workload by pushing a higher number of queries/sec, we see lower throughput on SQL 2019 because we max out CPU earlier and start seeing errors as a result.

I hope this makes sense, and I wonder if others have had a similar experience?

Thank you!

A library required which generates dictionary attack passwords (for testing my theory)

I have calculated that a 15-character-long password would take more than 8000 years to crack with the most powerful supercomputer available today.
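
(Whether the 8000-year figure holds depends entirely on the assumed alphabet and guess rate; a back-of-envelope sketch of the kind of estimate involved, where both constants are assumptions rather than the question's own, looks like this:)

// Rough brute-force estimate: keyspace divided by guess rate.
double keyspace = Math.Pow(26, 15);      // lowercase-only alphabet, ~1.7e21 candidates
double guessesPerSecond = 1e10;          // assumed attacker speed
double years = keyspace / guessesPerSecond / (365.25 * 24 * 3600);
// years ≈ 5,300 under these assumptions; a dictionary attack searches a far smaller space.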

My theory is this: even if passwords are simple and easy to remember, as long as they are at least 15 characters long, they are unbreakable.

So, according to me, a simple but long password like "iloveunitedstates" is an unbreakable password.

But someone pointed out that, using a dictionary attack, this password can be cracked easily.

But I am not convinced that this password can be broken.

So I wanted to test it myself. Is there a library or tool that generates dictionary-attack password candidates, so that I can use it to see whether the password "iloveunitedstates" is crackable or not?

Please let me know if there is such a library or tool.

networking – High packet loss and latency bursts on Ubuntu (wifi) but not on macbook testing at the same time?

I have a TP-Link Archer T4UH (https://www.tp-link.com/us/support/download/archer-t4uh/) on Ubuntu 20.04, and an AirPort Extreme card with Broadcom firmware on the MacBook Air.

The network is bad, unfortunately, but it seems like things are way worse on my Ubuntu setup. I am thinking of buying a new adapter just to see, but am wondering if there is anything else to debug on the Ubuntu side to see why so much packet loss is happening. I think the signal strength is -47 dBm on Ubuntu and -57 dBm on the Mac. What is very suspicious is that the noise level is reported as 0 dBm on Ubuntu and -95 dBm on the Mac (see below for Ubuntu iwconfig output).

enx503eaa4de20b  IEEE 802.11bgn  ESSID:"TALKTALK01FBF8"  Nickname:"<WIFI@REALTEK>"
          Mode:Managed  Frequency:2.412 GHz  Access Point: 40:9B:CD:01:FB:F8   
          Bit Rate:130 Mb/s   Sensitivity:0/0  
          Retry:off   RTS thr:off   Fragment thr:off
          Power Management:off
          Link Quality=100/100  Signal level=-45 dBm  Noise level=0 dBm
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:0   Missed beacon:0
