c++ – Dilemma over authenticity of gcov generated code coverage percentage where unit tests are not technically correct

When I joined my company as a newcomer, I was exploring the unit test suite of the product code. It uses the gtest framework. But when I checked the tests, they were all testing whole functionality by calling real functions and asserting on the expected output. Below is one such test case as an example:

TEST(nle_26, UriExt1)
{
    int threadid = 1;
    std::shared_ptr<LSEng> e = std::make_shared<aseng::LSEng>(threadid, "./daemon.conf");
    std::shared_ptr<LSAttrib> attr = e->initDefaultLSAttrib();
    e->setLSAttrib(attr);
    std::shared_ptr<DBOwner> ndb = e->initDatabase(datafile, e->getLogger());
    e->loadASData(ndb);
    e->setVerbose();

    std::shared_ptr<NewMessage> m = std::make_shared<NewMessage>(e->getLogger());
    ASSERT_TRUE(m != nullptr);
    ASSERT_TRUE(e != nullptr);
    m->readFromFile("../../msgs/nle1-26-s1");
    e->scanMsg(m, &scan_callBack_26, NULL);
    std::map<std::string, std::vector<std::string>> Parts = e->verboseInfo.eventParts;
    std::vector<std::string> uris = Parts["prt.uri"];
    ASSERT_EQ(uris.size(), 2);
    ASSERT_EQ(uris[0], "mailto:www.us_megalotoliveclaim@hotmail.com");
    ASSERT_EQ(uris[1], "hotmail.com");
}

I found that all the tests in the unit test directory follow the same pattern:

  1. Creating and initialising the actual object
  2. Calling the actual functions
  3. Starting the actual daemon
  4. Loading the actual database, around 45 MB in size
  5. Sending an actual mail to the daemon for parsing by calling the actual scanMsg function, etc.

So all the tests look more like functional tests than unit tests.

But the critical part is that, on the official intranet site, the code coverage of this product is reported as 73%, computed using gcov.

Now, a code coverage tool like gcov computes its figures from the following parameters:

  1. How often each line of code executes
  2. What lines of code are actually executed
  3. How much computing time each section of code uses.

Since these tests run the actual daemon, load the real database and call the actual functions to scan the message, the three parameters above will of course play some role, so I doubt the coverage will be completely zero.
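
For reference, here is a rough sketch of how gcov collects that data with a GCC build; the file names and the tiny looksLikeUri function below are hypothetical, made up purely for illustration:

// Hypothetical workflow (file names invented for this sketch):
//   g++ --coverage -O0 -g scanner.cpp scanner_test.cpp -lgtest -lgtest_main -pthread -o scanner_test
//   ./scanner_test          # running the tests writes .gcda counter files next to the objects
//   gcov scanner.cpp        # produces scanner.cpp.gcov with per-line execution counts

// scanner.cpp -- every executable line gets its own counter; gcov only records
// that a line ran, not whether any assertion checked its result.
#include <string>

bool looksLikeUri(const std::string& token) {
    if (token.rfind("mailto:", 0) == 0) {               // counted when a mailto: token is seen
        return true;
    }
    return token.find("://") != std::string::npos;      // counted for every other token
}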

But the questions bothering me are:

  1. Black-box testing also exercises functionality just like this, so what is the difference between the tests above and functional tests? In black-box testing, testers who are unaware of the internal code write test cases against the specific requirements. How are the tests above different from that? And can the gcov-generated coverage for this test suite be trusted, or is it misleading?

  2. Given that the gcov coverage data is based on a test suite whose unit tests are not technically correct unit tests, does that mean the actual code coverage may even be zero?

  3. In a unit test we mock function calls using a framework like Google Mock rather than making the actual calls; the purpose of a unit test is to test the code itself, one smallest unit at a time (see the sketch right after this list). But with the tests above, which look more like functional tests, can gcov generate reliable code coverage data?
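
For contrast, here is a minimal sketch of what a mocked unit test could look like if the scanner's collaborators sat behind a small interface. The UriDatabase interface, MockUriDatabase, and extractUris below are hypothetical names invented for this illustration, not the product's real API:

#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <string>
#include <vector>

// Hypothetical seam: the extractor depends on this small interface instead of
// loading the real 45 MB database.
class UriDatabase {
public:
    virtual ~UriDatabase() = default;
    virtual bool isKnownDomain(const std::string& domain) const = 0;
};

class MockUriDatabase : public UriDatabase {
public:
    MOCK_METHOD(bool, isKnownDomain, (const std::string&), (const, override));
};

// Hypothetical unit under test: pulls URIs out of a message body.
std::vector<std::string> extractUris(const std::string& body, const UriDatabase& db) {
    std::vector<std::string> out;
    auto pos = body.find("mailto:");
    if (pos != std::string::npos) {
        std::string uri = body.substr(pos);
        out.push_back(uri);
        auto at = uri.rfind('@');
        if (at != std::string::npos && db.isKnownDomain(uri.substr(at + 1))) {
            out.push_back(uri.substr(at + 1));
        }
    }
    return out;
}

// The test exercises only extractUris: no daemon, no config file, no database.
TEST(UriExtractor, FindsMailtoUriWithoutRealDatabase) {
    MockUriDatabase db;
    EXPECT_CALL(db, isKnownDomain("hotmail.com")).WillOnce(::testing::Return(true));

    auto uris = extractUris("see mailto:www.us_megalotoliveclaim@hotmail.com", db);

    ASSERT_EQ(uris.size(), 2u);
    EXPECT_EQ(uris[0], "mailto:www.us_megalotoliveclaim@hotmail.com");
    EXPECT_EQ(uris[1], "hotmail.com");
}

Nothing here starts a daemon or loads the 45 MB database, so whatever lines gcov reports as covered were exercised by this one unit in isolation.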

This has been haunting me for the last two days, so I thought I would put it on the table for the experts.

Awaiting wonderful insights 🙂

Thanks.


google – I cannot conquer code coverage inside Promise “then” and “catch”

I am trying to gain code coverage on my component by implementing a unit test in my spec file, as follows:

home.component.spec.ts

import { HomeService } from './home.service';
import { HttpClientTestingModule } from '@angular/common/http/testing';
import { ComponentFixture, fakeAsync, TestBed, tick } from '@angular/core/testing';

import { HomeComponent } from './home.component';
import { of } from 'rxjs';

describe('HomeComponent', () => {
  let component: HomeComponent;
  let fixture: ComponentFixture<HomeComponent>;

  // stubs
  const registryStub: HomeComponent = jasmine.createSpyObj('HomeComponent', ['getUserData']);
  const fakeNames = {x: 1};

  beforeEach(async () => {
    await TestBed.configureTestingModule({
      imports: [HttpClientTestingModule],
      declarations: [HomeComponent]
    })
      .compileComponents();
  });

  beforeEach(() => {
    fixture = TestBed.createComponent(HomeComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it('should create', () => {
    expect(component).toBeTruthy();
  });


  it('should navigate on promise - success', fakeAsync(() => {

    spyOn(component, 'getUserData').and.returnValue(of(fakeNames));
    (registryStub.getUserData as jasmine.Spy).and.returnValue(Promise.resolve(['test']));
    component.getUserData();

    tick();
    expect(component.getUserData).toHaveBeenCalled();

    // spyOn(component, 'getUserData').and.returnValue(of(fakeNames));
    // component.getUserData().toPromise()
    // .then((data: any) => {
    //   expect(data).toEqual(fakeNames);
    // });
  }));


});

When I run the “ng test --code-coverage” command, I can see that the code inside the “then” and “catch” blocks is not being covered, as shown in the illustration below:
(screenshot: code coverage is not being reached within the red-colored area)

Could anyone point me in the right direction to get complete code coverage on this component?

By the way, I have a public repo for this:

I look forward to a reply from you guys.

Thanks in advance folks!

test – Improve coverage of results – Python

I have created a function that reads an Excel document and returns 2 datasets and a list.
Then I wrote a test, and finally I looked at the coverage, but the value is very low and the function is barely being executed.

Function:


import pandas as pd


def xlsx_prep(excel_doc):
    """Deletes the rows which do not fulfill the requirement
    of being banned and takes only the non-repeated pollsters
    which took part and changes the grade into rounded values
    of grade and/ or numerical values of grade

    :param
    excel_doc: (dataset) containing the credibility sorted by pollster
    :return:
    excel_pollster: list with the filtered information of the pollsters
    excel (dataframe): contains the filters and the grades are rounded
    excel_credibility (dataframe): based on 'excel', it transform the grades
    into a numerical value
    """
    # global excel
    excel = pd.read_excel(excel_doc, engine='openpyxl')
    excel = excel[excel['Banned by 538'] != 'yes']
    excel_pollster = list(excel['Pollster'].unique())

    # round grades to the lower grade
    excel = excel.replace({'538 Grade': {'A+': 'A',
                                         'A-': 'A',
                                         'B+': 'B',
                                         'B-': 'B',
                                         'C+': 'C',
                                         'C-': 'C',
                                         'A/B': 'B',
                                         'B/C': 'C',
                                         'C/D': 'D',
                                         'D+': 'D',
                                         'D-': 'D'}})

    # replace grade alpha values into numerical. valid for ex. 5
    excel_credibility = excel.replace({'538 Grade': {'A': '1',
                                                        'B': '0.5',
                                                        'C': '0',
                                                        'D': '-0.5',
                                                        'F': '-1'}})

    # change type of cell to get the result
    excel_credibility['538 Grade'] = pd.to_numeric(excel_credibility['538 Grade'])
    excel_credibility['Predictive    Plus-Minus'] = pd.to_numeric(excel_credibility['Predictive    Plus-Minus'])
    excel_credibility['Credibility'] = excel_credibility['538 Grade'] + excel_credibility['Predictive    Plus-Minus']

    return excel_pollster, excel, excel_credibility

Now here is the test:

class TestDataExpl(unittest.TestCase):

    # Test xlsx_prep function
    def test_xlsx_prep(self):
        print("Starting test_xlsx_prep")
        a, b, c = xlsx_prep(excel_doc)
        self.assertEqual(a[1], 'Selzer & Co.')
        self.assertEqual(b['538 Grade'][0], 'A')
        self.assertEqual(c['538 Grade'][0], 1.0)

if __name__ == "__main__":
    suite = TestSuite()
    suite.addTest(TestAddOne("test_xlsx_prep"))

    TextTestRunner().run(suite)

Finally, here is the code I run to check the coverage:

coverage run xlsx_prep.py
coverage report

The result is:
(screenshots of the coverage report showing very low coverage)

I don't know whether I'm really not exercising enough cases in the test, or whether I'm not calling the function correctly so that the test actually runs, and that is why I end up with such low coverage.

What am I doing wrong?

Thank you very much.

mg.metric geometry – On triangulations and “coverage” of circumcircles

Let $P$ be a convex quadrilateral defined by four vertices $a$, $b$, $c$, and $d$. Suppose that the circumcircle of $\triangle abd$ contains $c$.* Let $D(\triangle abc)$ denote the area enclosed by the circumcircle of $\triangle abc$. My claim is that

$$D(\triangle abc) \cup D(\triangle acd) \subseteq D(\triangle abd) \cup D(\triangle bcd).$$

Any tips for proving this statement? Below is an example: clearly the red area is contained within the blue. Here is an interactive example I made: https://www.geogebra.org/calculator/pzw75awc.


*In the context of Delaunay triangulations, $\triangle abd$ and $\triangle bcd$ do not satisfy the Delaunay condition. Edge $bd$ is an illegal edge, and the Delaunay triangulation of $P$ consists of $\triangle abc$ and $\triangle acd$.

“Insufficient HTTPS Coverage” in Google Search Console… for pages that are being redirected to HTTPS

I’m getting an “Insufficient HTTPS coverage on your site” error in my domain property in Google Page Experience, which is strange because my entire site uses HTTPS and has done so for a long time. So I went and searched for HTTP-only URLs on my site… but the only ones that Google shows me are pages that already have HTTPS versions. For example: if my site is https://example.com, Google is showing me http://example.com as an HTTP page on my site.

Now, all my HTTP URLs are redirected using 301 to the proper HTTPS ones, and have been for a long time. The only thing I didn’t do was to remove my HTTP site property when I migrated years ago; I just added a new HTTPS site property.

What do I do now with Google’s warning? Do I ignore it? Do I remove the HTTP property?

seo – Unknown long phrases searched on my website reported in search console coverage

I checked my Search Console coverage. It seems that many long phrases are being searched on my site, and I have many noindex pages in coverage, but I have never made such pages.
Is my website hacked? I don't know what to do now.
Please see the following screenshot:

excluded by noindex tag


NEWS – Newsweek makes positive coverage of Bitcoin | Proxies123.com

Newsweek makes positive coverage of Bitcoin

Newsweek, the well-known weekly in the United States, has taken a positive turn from its previous views, this time evaluating whether Bitcoin can become the new gold standard.
The article appeared on Wednesday, January 6; it talks about a potential BTC price of $146,000, based on the latest model from JPMorgan.
Reporter Scott Reeves wrote: “All that glitters is not gold, but it could be Bitcoin,” adding, “And in the long run, it could be more valuable.”
It is fair to say that new users are entering for the long term, so we should also put an amount of BTC away for the long term, that is to say, hold it.
What do you think about investing in BTC for the long term?
Would you be willing to invest for a term of between 3 and 5 years?

Why is there a bright edge in my photos even though my studio lights provide even coverage?

You haven’t told us exactly which Paul Buff Alien Bees flashes you’re using, but many studio flashes take longer to release their energy than most speedlights do. For the most part, a camera’s flash sync (X-sync) rating is based on using the camera with that brand’s own speedlights sitting directly on the hot shoe, or with the camera’s built-in popup flash. This is particularly the case with cameras that do not have a PC port¹ used to send a “fire” signal to external flashes without using the hot shoe connection.

In many cases, this means you must shoot with an exposure time longer (a “slower shutter speed”) than your camera’s flash sync setting (X-sync speed). If you use an exposure time shorter than the time it takes the flash to fully fire, the second shutter curtain of your camera will begin closing before the flashes have produced all of their light. The parts of the sensor covered by the second shutter curtain as it is closing will be dimmer than the parts of the frame not covered by the second curtain until after the flash has released most or all of its energy.

Your Canon EOS Rebel T3/1100D has a flash sync setting of 1/200 second. That is equal to 5 milliseconds. The Alien Bees B1600, for example, has a T.1 flash duration at full power of 1/300 second, or about 3.33 milliseconds. So far so good. The flash can release 90% of its burst of light in a shorter time than the camera’s X-sync speed. But then you have to factor in the delay between the time your camera signals the Cactus transmitter to “fire” the flash and the time the Cactus receiver(s) tells the flash(es) to “fire”. If there is any appreciable delay, then at 1/200 second your second shutter curtain will begin closing before the flash has released all of its light.

One thing I would check is to be sure you haven’t accidentally dialed in some delay in your Cactus V6 II transmitter’s settings. Anything greater than a delay of 1.67 milliseconds (5 ms minus 3.33 ms) will mean the flash begins firing too late to complete its burst of light before your second shutter curtain starts closing. In practice any delay dialed in would need to be shorter than that, because it takes time for the radio trigger’s microprocessor to react to the camera’s “fire” signal and then send the radio pulse to “fire”, which is an encoded radio signal of non-negligible length, and then for the receiver’s microprocessor to decode that signal and send the “fire” command to the flash(es) to which it is connected.
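
Spelling the timing budget out with just the figures quoted above (X-sync of 1/200 s, B1600 T.1 of 1/300 s):

$$\tfrac{1}{200}\,\mathrm{s} = 5\,\mathrm{ms}, \qquad \tfrac{1}{300}\,\mathrm{s} \approx 3.33\,\mathrm{ms}, \qquad \text{maximum usable trigger delay} \approx 5\,\mathrm{ms} - 3.33\,\mathrm{ms} \approx 1.67\,\mathrm{ms}.$$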

As a proof of concept that the shutter curtain beginning to close before the flash reaches T.1 is the root cause of your issue, take a few test shots with increasingly longer exposure times – 1/200, 1/160, 1/125, 1/100, 1/60, 1/30, etc. – and see if the bright area on one side of the frame gets increasingly wider toward the “far” side from where you are currently getting the bright strip.

¹ PC in the context of flash photography has nothing to do with a personal computer. It is an abbreviation of Prontor/Compur. Prontor has its origins in the Italian word pronto (quick) and was a brand of shutter produced by Alfred Gauthier in the 1950s. Compur, derived from the word compound, was the shutter brand of the Deckel Company. Both companies were based in Germany and both counted Zeiss as an influential stockholder when they introduced the standard 1/8-inch coaxial connector for shutter/flash synchronization.