active directory – SRV records in a Samba AD

In a Debian Samba server environment, a second domain controller was added.
The plan is to migrate to this new DC and later add another one.
To verify that everything works, I looked in DNS
and noticed that the various service locator (SRV) records all still point to the old domain controller,
even though we did a role transfer to the new server.

I’d assume they should point to the new DC, or at least to both.

I’m curious how this is supposed to behave, as I’m not sure how Samba handles it compared to Windows.
I come from a Windows background, and Samba is fairly new to me.
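
For reference, these are the standard DNS names under which AD domain controllers register their SRV records (they can be inspected with `dig SRV` or `samba-tool dns query`; `example.com` below is a placeholder realm). Note that only the `pdc._msdcs` entry is tied to a FSMO role (the PDC emulator); the other records should list every DC, so with two DCs both would normally be expected to appear. A small sketch that just builds the names to check:

```python
def dc_srv_names(realm):
    """Standard SRV record names an AD domain controller registers under
    (simplified: site-specific _sites.* names are omitted)."""
    return [
        f"_ldap._tcp.{realm}",
        f"_kerberos._tcp.{realm}",
        f"_kerberos._udp.{realm}",
        f"_kpasswd._tcp.{realm}",
        f"_ldap._tcp.dc._msdcs.{realm}",
        f"_kerberos._tcp.dc._msdcs.{realm}",
        # The PDC emulator record: the only SRV name expected to move
        # after a FSMO role transfer.
        f"_ldap._tcp.pdc._msdcs.{realm}",
    ]

for name in dc_srv_names("example.com"):
    print(name)
```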

sql server – How to delete records with an FK constraint using a DELETE trigger

I’m learning MS SQL Server and trying to implement a trigger that deletes records in a child table when the corresponding record in the parent table is deleted. I’m using the AdventureWorksDW database provided by Microsoft.

I have two tables, DimProductSubcategory and DimProduct. If I delete a product subcategory in DimProductSubcategory, all related records in DimProduct should also be deleted.

So I have created a trigger:

CREATE TRIGGER on_delete_trigger
    ON DimProductSubcategory
    AFTER DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        DELETE FROM DimProduct WHERE (DimProduct.ProductSubcategoryKey IN (SELECT DimProduct.ProductSubcategoryKey FROM DimProductSubcategory))
    END

But when I try to delete a record in DimProductSubcategory I get:

The DELETE statement conflicted with the REFERENCE constraint "FK_DimProduct_DimProductSubcategory". 
The conflict occurred in database "AdventureWorksDW2019", table "dbo.DimProduct", column 'ProductSubcategoryKey'.

I understand what the error message means, but I don’t understand why it occurs: I thought the trigger was supposed to delete all the child records so that the parent record could be deleted without violating referential integrity.

That said, I’m not 100% sure I got my DELETE statement right.

So how can I implement a trigger to delete child records when a parent record is deleted?

What is the recommended way to process buggy SPF (DNS TXT) records?

Consider an SPF record with one or more invalid entries, e.g.:

v=spf1 ip4:1.2.3.4/24 ip6:fe80:0::/64 fe80:1::/64 mx ~all

As you can see, there is an invalid entry (it is missing the ip6: mechanism name) following two valid ones. What is the recommendation: what should an SPF validator report? Should the check pass (provided that the sender matches the first or second rule), or should it hard-fail due to the parse error?

I would guess most MTAs soft-fail, because I came across an SPF record like this on a fairly popular service, and people complain only about delayed delivery, not about mail never being delivered.
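
For what it’s worth, RFC 7208 (section 4.6) is strict here: a syntax error anywhere in the record causes check_host() to return permerror for the whole record, rather than skipping the bad term. A rough sketch of that rule (the regexes below are a simplified approximation of the SPF grammar, not a complete parser):

```python
import re

# Simplified patterns for SPF mechanisms and modifiers (not the full grammar).
MECHANISM = re.compile(
    r'^[+\-~?]?(all|include:\S+|a(:[^/\s]+)?(/\d+)?|mx(:[^/\s]+)?(/\d+)?'
    r'|ptr(:\S+)?|ip4:\d+\.\d+\.\d+\.\d+(/\d+)?|ip6:[0-9a-fA-F:]+(/\d+)?'
    r'|exists:\S+)$'
)
MODIFIER = re.compile(r'^[A-Za-z][A-Za-z0-9_.-]*=\S*$')

def check_spf_syntax(record):
    """Return 'permerror' if any term fails to parse, else 'ok'.
    Per RFC 7208 section 4.6, one bad term invalidates the whole record."""
    terms = record.split()
    if not terms or terms[0] != 'v=spf1':
        return 'permerror'
    for term in terms[1:]:
        if not (MECHANISM.match(term) or MODIFIER.match(term)):
            return 'permerror'
    return 'ok'

print(check_spf_syntax("v=spf1 ip4:1.2.3.4/24 ip6:fe80:0::/64 fe80:1::/64 mx ~all"))
```

In practice many receivers appear to be more forgiving than the RFC, which may explain the soft-fail behavior observed above.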

data tables – How to stop users from loading more records on a grid

A legacy UI application I am currently working on has a Main List that displays all records stored in Elasticsearch in a paginated way. Elasticsearch may hold more than 1 million records.

The problem arose when I added functionality to load the next page of records when the scroll reaches the bottom (there is no direct pagination widget).

With this scroll-to-load functionality, the user can scroll infinitely. On each “scroll at the bottom” event, data loads from record 0 to pageNo*pageSize. I set the page size as small as 50 records. So, for example,

  • Page 1 will load 50
  • Page 2 – 100 records
  • Page 3 – 150 records
  • Page 4 – 200 records

and so on.

After a certain page number, say 20 pages (1,000 records loaded in the browser), I want to warn the user:
“Don’t go further, otherwise the application will become slow; use search with keywords instead.”

Currently I do this with a simple auto-hiding alert that starts appearing from page 20 onward and repeats at intervals of 5 pages (20, 25, 30, etc.).

Is there a better way to let users know they are straining the functionality simply because we allow them to?

I know this type of data loading sounds crazy, but it is what we have to work with. I need help letting the user know that the application will become slow after a certain point.

Thanks in advance.
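
One thing that may help when making the case for a hard cap: because each “scroll at the bottom” event re-fetches everything from record 0 up to pageNo*pageSize, the cumulative data transferred grows quadratically with the page number. A back-of-the-envelope sketch using the page size of 50 from the question:

```python
PAGE_SIZE = 50

def rows_in_browser(page):
    # Each scroll event reloads rows 0 .. page * PAGE_SIZE.
    return page * PAGE_SIZE

def total_rows_transferred(pages):
    # Cumulative rows fetched over the wire after `pages` scroll events,
    # since every event re-fetches from record 0.
    return sum(rows_in_browser(p) for p in range(1, pages + 1))

# By page 20 the browser holds 1,000 rows, but 10,500 rows have been fetched.
print(rows_in_browser(20), total_rows_transferred(20))
```

A concrete number like this tends to be more persuasive than an alert alone; one alternative design is to simply stop auto-loading at the cap and replace the alert with a “refine your search” call to action.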

schema – Two records in the same table must be related somehow

I’ve inherited a somewhat strange table whose records must somehow be linked to each other. In the real world, these two “structures” are combined into one larger “structure”, so there should be a way of linking them.

Should I use an “association table” to link the two IDs, or is there a better way for this sort of thing? Perhaps I’ve answered my own question, but I thought I’d ask if there’s another way…

Another option would be to delete the second record and, at the same time, add a reference to it as a new column in the first record.

Maybe this is a ridiculously open-ended question for this site, but surely I’m not the first person to run into this…

Thanks,
Sean

8 – Setting the node ID sequence to a particular higher value for newly created records

We have an Organic Groups based portal in Drupal 7 where many blood banks add records. We recently began a Drupal 8 migration and are close to completion. However, we would like to onboard selected blood banks to the new D8 portal first and have it tested before we ask all the other blood banks to use the new portal.

Is it possible to set a higher NID starting value for records created by the new users? Migrating the data for the remaining customers can then happen seamlessly, since their NID values will all be lower.

Thanks.

postgresql – Postgres group records by consecutive types

I would like to group rows into runs of consecutive identical values and report each run’s start and end time.

+----------+----------------------------+
|  Fruit   | Time                       |
+----------+----------------------------+
| Apple    | 2020-09-08 00:00:00.000000 | 
| Apple    | 2020-09-08 01:00:00.000000 | 
| Orange   | 2020-09-08 02:00:00.000000 | 
| Orange   | 2020-09-08 03:00:00.000000 | 
| Apple    | 2020-09-08 04:00:00.000000 | 
+----------+----------------------------+

The results should look like this:

+----------+----------------------------+----------------------------+
|  Fruit   | Start Time                 | End Time                   |
+----------+----------------------------+----------------------------+
| Apple    | 2020-09-08 00:00:00.000000 | 2020-09-08 01:00:00.000000 |
| Orange   | 2020-09-08 02:00:00.000000 | 2020-09-08 03:00:00.000000 |
| Apple    | 2020-09-08 04:00:00.000000 | 2020-09-08 04:00:00.000000 |
+----------+----------------------------+----------------------------+
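
In PostgreSQL this is the classic “gaps and islands” problem, usually solved with window functions (e.g. the row_number difference trick). The grouping logic itself can be sketched in Python with itertools.groupby, assuming the rows arrive ordered by time (times abbreviated for readability):

```python
from itertools import groupby

# (fruit, time) rows, already ordered by time.
rows = [
    ("Apple",  "2020-09-08 00:00"),
    ("Apple",  "2020-09-08 01:00"),
    ("Orange", "2020-09-08 02:00"),
    ("Orange", "2020-09-08 03:00"),
    ("Apple",  "2020-09-08 04:00"),
]

# Collapse each run of consecutive identical fruits into
# (fruit, start_time, end_time).
runs = []
for fruit, group in groupby(rows, key=lambda r: r[0]):
    group = list(group)
    runs.append((fruit, group[0][1], group[-1][1]))

for run in runs:
    print(run)
```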

Can I point a domain to two web hosting services by adding A records in my DNS Records?

What happens if I do that?

python – Parse selected records from empty-line separated file

This is my first post here, and I hope to get some recommendations to improve my code. I have a parser that processes a file with the following structure:

SE|43171|ti|1|text|Distribution of metastases...
SE|43171|ti|1|entity|C0033522
SE|43171|ti|1|relation|C0686619|COEXISTS_WITH|C0279628

SE|43171|ab|2|text|The aim of this study...
SE|43171|ab|2|entity|C2744535
SE|43171|ab|2|relation|C0686619|PROCESS_OF|C0030705

SE|43171|ab|3|text|METHODS Between April 2014...
SE|43171|ab|3|entity|C1964257
SE|43171|ab|3|entity|C0033522
SE|43171|ab|3|relation|C0085198|INFER|C0279628
SE|43171|ab|3|relation|C0279628|PROCESS_OF|C0030705

SE|43171|ab|4|text|Lymph node stations...
SE|43171|ab|4|entity|C1518053
SE|43171|ab|4|entity|C1515946

Records (i.e., blocks) are separated by an empty line. Each line in a block starts with an SE tag; the text tag always occurs in the first line of each block (the fifth |-separated field, index 4 in the code). The program extracts:

  1. All relation tags in a block, and
  2. The corresponding text (i.e., the sentence ID (sent_id) and sentence text (sent_text)) from the first line of the block, if a relation tag is present in the block. Note that the relation tag is not necessarily present in every block.

Below is the mapping dictionary between tags and the related fields in the file, followed by the main program.

# Specify mappings to parse lines from input file
mappings = {
    "id": 1,
    "text": {
        "sent_id": 3,
        "sent_text": 5,
    },
    "relation": {
        "subject": 5,
        "predicate": 6,
        "object": 7,
    },
}

Finally, the code:

def extraction(file_in):
    """This function extracts lines with 'text' and 'relation'
    tag in the 4th field."""
    extraction = {}
    file = open(file_in, encoding='utf-8')
    bla = {'text': []}
    for line in file:
        results = {'relations': []}
        if line.startswith('SE'):
            elements = line.strip().split('|')
            pmid = elements[1]

            if elements[4] == 'text':
                tmp = {}
                for key, idx in mappings['text'].items():
                    tmp[key] = elements[idx]
                bla['text'].append(tmp)

            if elements[4] == 'relation':
                tmp = {}
                for key, ind in mappings['relation'].items():
                    tmp[key] = elements[ind]
                tmp.update(sent_id=bla['text'][0]['sent_id'])
                tmp.update(sent_text=bla['text'][0]['sent_text'])
                results['relations'].append(tmp)
                extraction[pmid] = extraction.get(pmid, []) + results['relations']
        else:
            bla = {'text': []}
    file.close()
    return extraction

The output looks like:

import json
print(json.dumps(extraction('test.txt'), indent=4))

{
    "43171": [
        {
            "subject": "C0686619",
            "predicate": "COEXISTS_WITH",
            "object": "C0279628",
            "sent_id": "1",
            "sent_text": "Distribution of lymph node metastases..."
        },
        {
            "subject": "C0686619",
            "predicate": "PROCESS_OF",
            "object": "C0030705",
            "sent_id": "2",
            "sent_text": "The aim of this study..."
        },
        {
            "subject": "C0085198",
            "predicate": "INFER",
            "object": "C0279628",
            "sent_id": "3",
            "sent_text": "METHODS Between April 2014..."
        },
        {
            "subject": "C0279628",
            "predicate": "PROCESS_OF",
            "object": "C0030705",
            "sent_id": "3",
            "sent_text": "METHODS Between April 2014..."
        }
    ]
}

Thanks for any recommendations.
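
Since the question asks for recommendations, here is one possible restructuring (a sketch, not the only way): splitting the input into blocks on blank lines with itertools.groupby removes the need for the bla/results bookkeeping, and a generator keeps the function easy to test. The field positions are taken from the question’s mappings dictionary.

```python
from itertools import groupby

# Field positions, as in the question's `mappings` dictionary.
TEXT_FIELDS = {"sent_id": 3, "sent_text": 5}
RELATION_FIELDS = {"subject": 5, "predicate": 6, "object": 7}

def extract_relations(lines):
    """Yield (pmid, record) pairs: one record per 'relation' line,
    tagged with the sentence id/text from the block's 'text' line."""
    # A block is a run of non-blank lines; blank lines separate blocks.
    for is_block, block in groupby(lines, key=lambda ln: bool(ln.strip())):
        if not is_block:
            continue
        sentence = {}
        for line in block:
            fields = line.strip().split('|')
            if fields[0] != 'SE':
                continue
            pmid, tag = fields[1], fields[4]
            if tag == 'text':
                sentence = {k: fields[i] for k, i in TEXT_FIELDS.items()}
            elif tag == 'relation':
                record = {k: fields[i] for k, i in RELATION_FIELDS.items()}
                record.update(sentence)
                yield pmid, record
```

The caller can then rebuild the same dict-of-lists output shape as before, e.g. with collections.defaultdict(list), appending each record under its pmid.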