postgresql – How to avoid running out of connections or shared memory

I have a Go program that does its computation in several threads at once, each fetching its own data from Postgres. The number of threads depends on a previous result, so there can be hundreds of threads trying to query Postgres at the same time.

Go's database/sql library allows you to specify a connection limit, which prevents Postgres from running out of shared memory or free connection slots.

If I hard-code the maximum number of connections, I will run out of connections whenever something else is also connected. On the other hand, if I hard-code a number that is too low, the Go program's performance will be unnecessarily limited.

What would be the best way to let the Go program use as many connections as possible without hitting the limits? I imagine this number will vary with the number of other services connected to the database at the time.

I am thinking of running PgBouncer between the database and the Go program, hoping it will accept all of the Go program's connections, pass through as many as possible, and queue the rest until connections are released. I am not sure whether PgBouncer actually does this, but I will try it next.

Is there perhaps another way to get a connection pool that blocks connection attempts when there are no real connections free? Blocking, not rejecting, since a rejected connection would mean adding retry logic to my Go program.
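For what it's worth, Go's database/sql pool already blocks rather than rejects: once the limit set with db.SetMaxOpenConns(n) is reached, further queries wait until a connection is released. That blocking behavior can be sketched with a buffered channel used as a counting semaphore (a standalone sketch, not the library's actual implementation):

```go
package main

import "fmt"

// pool mimics the blocking behavior of database/sql's connection pool:
// a buffered channel acts as a counting semaphore of size maxConns.
type pool struct {
	sem chan struct{}
}

func newPool(maxConns int) *pool {
	return &pool{sem: make(chan struct{}, maxConns)}
}

// Acquire blocks (it does not reject) when all slots are taken.
func (p *pool) Acquire() { p.sem <- struct{}{} }

// Release frees a slot, unblocking any waiting Acquire.
func (p *pool) Release() { <-p.sem }

func main() {
	p := newPool(2)
	p.Acquire()
	p.Acquire()
	// A third Acquire() here would block until a Release() frees a slot,
	// mirroring how database/sql queues queries at the SetMaxOpenConns limit.
	p.Release()
	p.Acquire()
	fmt.Println("in use:", len(p.sem)) // → in use: 2
}
```

So the open question is really just choosing n, or letting PgBouncer arbitrate the server-wide limit across services.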

Windows 7: how many images can a Lightroom catalog really handle?

I have a Lightroom catalog with 211,489 images.

It is certainly a bit slower than catalogs of fewer than 10K images, but it is usable. However, it takes a long time to start up fully: LR seems to insist on verifying that every photo file in the library is actually present when it loads, or it performs some sort of library scan. Either way, it takes a few minutes until the entire catalog is available.

The GUI stays responsive while the libraries are being scanned, so you can work while it counts the files. That is, unless you want to work on a file that has not yet been scanned.


While this is a bit off-topic, the best software I've found for managing huge catalogs is, of all things, Picasa.

Picasa handles a collection of 600K images of mine with no noticeable slowdown at startup. It also seems to load everything lazily, so you get a low-resolution thumbnail almost instantly, which then sharpens as the actual file loads.

What I can say, on every platform I have tried, is that importing always seems to take forever. Importing 100K+ images takes 10+ hours, if not days. I strongly recommend splitting the import into batches, so that if something dies, your computer shuts down accidentally, or whatever, you will not lose all your progress.

bitcoind: is it possible to handle BCH transactions in the Bitcoin Core wallet?

I installed Bitcoin Core on a server and it works fine for BTC. I would like to know whether I can use the same server and software to support BCH as well.
I know that addresses generated in Bitcoin Core can also be used for BCH. But if I handle BCH transactions on this machine, how can I tell them apart from the BTC ones?

Thanks in advance for your answers.

How to handle AdSense ads in a single-page application (SPA)?

I have a SPA (single page application) where I will show AdSense ads.

The main reason I designed it as a SPA is that I don't want users to refresh the page to see different content. All routing is done client-side with JavaScript. I'm using React, Firebase and React-Router.

But the AdSense documentation says the following:

https://support.google.com/adsense/answer/1346295?hl=en

Auto-refreshing ads

Publishers are not permitted to refresh a page or an element of a page without the user requesting a refresh. This includes placing ads on pages or placements that redirect or refresh automatically. Additionally, publishers may not display ads for a preset amount of time (i.e., pre-roll) before users can view content such as videos, games, or downloads.

The thing is, users will NEVER request a refresh in my application.

What is the correct way to show multiple ads in a single-page application?

OPTION 1

  • Request new ads only when users change pages.

Example:

  • Users "navigate" to: /blog/some-blog-post-slug-A // SEE ADS
  • Users "navigate" to: /blog/some-blog-post-slug-B // SEE NEW ADS

Although the page is technically not refreshing, it behaves like a page refresh: the URL changes, but everything happens client-side. I re-render the blogPost component based on the new URL path.

CAN I DO THIS? Request new ads based on a client-side route change?

And what if my application is a game, and users spend about 30 minutes on a single screen playing it? Am I only allowed to show one ad for the entire 30-minute session? Or can I re-render the ad at a set interval?
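If route-change ad requests turn out to be acceptable, the mechanics are straightforward. A sketch (the route-change hook and paths are hypothetical; `adsbygoogle.push({})` is the standard AdSense request call, and a fresh `<ins class="adsbygoogle">` element must be mounted for each new view):

```javascript
// Pure helper: decide whether a client-side route change counts as a "new page".
// Comparing only the path ignores query/hash-only changes, so ad requests stay
// tied to real content changes.
function shouldRequestAd(prevPath, nextPath) {
  return prevPath !== nextPath;
}

// Called from the router (e.g. a React-Router location listener -- hypothetical).
function onRouteChange(prevPath, nextPath) {
  if (!shouldRequestAd(prevPath, nextPath)) return;
  // After mounting a fresh <ins class="adsbygoogle"> element for the new view,
  // ask AdSense to fill it:
  (window.adsbygoogle = window.adsbygoogle || []).push({});
}
```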

c# – How to handle identical code parts in the Factory Method pattern?

I have these classes:

public abstract class House{
   public string Name {set;get;}
   public int Floors {set;get;}
   public string RoofType {set;get;}
}

public class WoodenHouse:House{
   public string WoodType {set;get;}
   public int WoodAge {set;get;}
}

public class StoneHouse:House{
  public string StoneType{set;get;}
}

And I am trying to create a Factory Method for them:

    abstract class Creator
    {
        public abstract HouseInfo Info { get; set; }

        public Creator()
        {
        }

        public abstract House FactoryMethod();
    }

    class WoodenHouseCreator : Creator
    {
        public override HouseInfo Info { get; set; }

        public WoodenHouseCreator(WoodenHouseInfo info)
        {
            Info = info;
        }

        public override House FactoryMethod()
        {
            var info = Info as WoodenHouseInfo;

            var woodenHouse = new WoodenHouse();
            woodenHouse.Name = info.Name;
            woodenHouse.Floors = info.Floors;
            woodenHouse.RoofType = info.RoofType;
            woodenHouse.WoodType = info.WoodType;
            woodenHouse.WoodAge = info.WoodAge;
            return woodenHouse;
        }
    }

    class StoneHouseCreator : Creator
    {
        public override HouseInfo Info { get; set; }

        public StoneHouseCreator(StoneHouseInfo info)
        {
            Info = info;
        }

        public override House FactoryMethod()
        {
            var info = Info as StoneHouseInfo;

            var stoneHouse = new StoneHouse();
            stoneHouse.Name = info.Name;
            stoneHouse.Floors = info.Floors;
            stoneHouse.RoofType = info.RoofType;
            stoneHouse.StoneType = info.StoneType;
            return stoneHouse;
        }
    }

Here are the classes that hold the information needed to create a house:

    class HouseInfo
    {
        public string Name { set; get; }
        public int Floors { set; get; }
        public string RoofType { set; get; }
    }

    class WoodenHouseInfo : HouseInfo
    {
        public string WoodType { set; get; }
        public int WoodAge { set; get; }
    }

    class StoneHouseInfo : HouseInfo
    {
        public string StoneType { set; get; }
    }

And use:

        var houseInfo = new WoodenHouseInfo{
            Name = "HouseName",
            Floors = 2,
            RoofType = "Triangle",
            WoodType = "Pine",
            WoodAge = 100
        };

        House house;

        if(houseInfo is WoodenHouseInfo)
        {
            var creator = new WoodenHouseCreator(houseInfo);
            house = creator.FactoryMethod();
            Console.Write((house as WoodenHouse).WoodAge);
        }

Full code fiddle.

My problem is how to handle the code duplication. There are many lines that fill in the base House properties. How can I write that code only once?
Or should I not be using the Factory Method at all?

Add a populator class

class HousePopulator
{
    public void PopulateHouse(HouseInfo info, House house)
    {
        house.Name = info.Name;
        house.Floors = info.Floors;
        house.RoofType = info.RoofType;
    }
}

And use:

abstract class Creator
{
    public abstract HouseInfo Info{get;set;}
    public HousePopulator HousePopulator {get;set;}
    public Creator()
    {
        HousePopulator = new HousePopulator();
    }
    public abstract House FactoryMethod();
}

class WoodenHouseCreator : Creator
{
    public override HouseInfo Info{get;set;}

    public WoodenHouseCreator(WoodenHouseInfo info)
    {
        Info = info;
    }
    public override House FactoryMethod()
    {
        var info = Info as WoodenHouseInfo;

        var woodenHouse = new WoodenHouse();
        HousePopulator.PopulateHouse(Info, woodenHouse);
        woodenHouse.WoodType = info.WoodType;
        woodenHouse.WoodAge = info.WoodAge;
        return woodenHouse;
    }
}

How can I properly handle back-end service events in SignalR?

I have some doubts about the following code, which implements a SignalR endpoint that both receives and sends messages.

Basically, the ISendValuesService produces new values and raises a NewValueRegistered event whenever it does. In the handler for this event, the SignalR hub sends a message to all connected clients that have registered for that value's ID.

However, communication itself is not the problem.

The problem is that clients can call SubscribeToVariables and UnsubscribeFromVariables at runtime. For each of these calls a new Hub object is instantiated (by SignalR's design). When this happens, the new Hub also subscribes to the event, and any new value is then sent to clients multiple times, since there are now multiple handlers registered.

I currently guard the event registration with a static bool (_eventRegistered in the code below), but this feels a bit awkward.

Is there a recommended pattern for handling this case in SignalR, without resorting to skipping the repeated event registration in subsequent Hub instances?

 public class ValueHub : Hub
    {
        private ISendValuesService _sendValuesService;
        private IHubContext _hubContext;
        private static bool _eventRegistered;
        private static readonly object _eventRegistrationLock = new object();

        public ValueHub(ISendValuesService sendValuesService, IHubContext hubContext)
        {
            _hubContext = hubContext;

            _sendValuesService = sendValuesService;

            lock (_eventRegistrationLock)
            {
                if (!_eventRegistered)
                {
                    _sendValuesService.NewValueRegistered += OnNewValueRegistered;
                    _eventRegistered = true;
                }
            }
        }

        private void OnNewValueRegistered(object sender, NewValueRegisteredEventArgs e)
        {
            _ = Task.Run(() => HandleNewValueTask(e));
        }

        private void HandleNewValueTask(NewValueRegisteredEventArgs e)
        {
            var valueMsg = $"\"ID\":{e.Id},\"Value\":\"{e.Value,3}\",\"Timestamp\":\"{e.RegistrationTime}\"";

            Parallel.ForEach(_sendValuesService.Subscriptions,
                new ParallelOptions() { MaxDegreeOfParallelism = 4 },
                subscription =>
                {
                    if (subscription.Value.Contains(e.Id))
                    {
                        _hubContext.Clients.Client(subscription.Key).SendCoreAsync("NewValue", new object[] { valueMsg });
                    }
                });
        }

        public async Task SubscribeToVariables(IEnumerable variableIdsToRegister)
        {
            //Save the connectionId since the Context might be disposed before the task is started
            var connectionId = Context.ConnectionId.ToString();

            Debug.WriteLine($"Register variables for connection {connectionId}.");

            await Task.Run(() =>
                           _sendValuesService.SubscribeToVariables(connectionId, variableIdsToRegister)
                        ); 
        }

        public async Task UnsubscribeFromVariables(IEnumerable variableIdsToUnregister)
        {
            //Save the connectionId since the Context might be disposed before the task is started
            var connectionId = Context.ConnectionId.ToString();

            Debug.WriteLine($"Unregister variables for connection {connectionId}.");

            await Task.Run(() =>
                           _sendValuesService.UnsubscribeFromVariables(connectionId, variableIdsToUnregister)
                        );
        }

        protected override void Dispose(bool disposing)
        {
            base.Dispose(disposing);
        }
    }

dnd 5e – How to handle a player who has two characters when everyone else has one?

Play the new PC as an NPC until it is convenient to write them out of the story

The situation, as I understand it, is that your player wanted their old PC back, and now they have it. Asking them to choose, as the other answers suggest, will therefore simply result in the player keeping their old PC.

The problem, then, is that either way you are left with this new PC that technically nobody wants.

Suddenly having them disappear, die, or choose to leave the adventure could hurt plausibility: if they are willing to leave now, why did they decide to embark on this quest in the first place? Likewise, a sudden, artificial death just to get rid of them would damage immersion.

In other words, you are stuck with this additional PC. But at the same time, you don't want one player to unfairly have two PCs, especially since they probably only care about their previous PC (you can bet they will risk the new PC first in any remotely dangerous situation, because they may not care if it dies, at least not compared to the old PC they wanted back).

So the best solution I can see is for this new PC to become an NPC under your control. That way, none of your players has two PCs, but the narrative is not disrupted by the new PC's sudden disappearance. You can then have them leave the party and the story at a point that feels logical rather than artificial.

You should probably talk to the player who created this new PC about how they would like the character to leave. You don't need to rush your game, but it's worth checking with them that what you plan to do with this now-NPC fits their expectations. Maybe play up the conflict between the old PC and the new PC, perhaps with the group "siding with" the old PC (if the other players are on board with that)?

It is quite possible the player doesn't care what happens to the new PC now that they have their previous one back, in which case you are free to do whatever you want with the new PC, as long as it stays believable for you and your players.

But the main point is: if you control the new PC, nobody unfairly has an extra PC, and you can choose the moment at which the new PC leaves the party and the plot so that it makes sense in character and does not damage the narrative.

design – handling multiple similarly structured XML files in a Java project

We have a scenario in our project where we are provided with a set of XSDs. We convert these XSDs to Java POJOs with the help of JAXB. After that, we are supposed to update some values in the POJOs and convert them back into the corresponding XML files.
The number of XSDs provided to us is large. They are similar, but the location of the XML tags differs; for example, the same tag may appear under different parent elements in different XSDs. As a result, to update any element we had to write a separate method for each specific XML.
Is there any approach or design change by which we can make the update part generic?
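One way to avoid a separate update method per XSD is to locate the tag by name instead of by fixed position, for example with XPath over the parsed DOM. A sketch (the `price` tag and document shapes are made-up examples, not from the project):

```java
import java.io.ByteArrayInputStream;
import java.io.StringWriter;
import java.nio.charset.StandardCharsets;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class TagUpdater {

    // Sets the text of the first element named `tag`, wherever it appears.
    public static String updateTag(String xml, String tag, String value) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        // "//tag" matches the element regardless of where the XSD placed it.
        Node node = (Node) XPathFactory.newInstance().newXPath()
                .evaluate("//" + tag, doc, XPathConstants.NODE);
        if (node != null) {
            node.setTextContent(value);
        }

        // Serialize the modified DOM back to a string.
        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // The same <price> tag sits at different depths in these two documents,
        // yet a single update method handles both.
        System.out.println(updateTag("<order><price>1</price></order>", "price", "2"));
        System.out.println(updateTag("<order><item><price>1</price></item></order>", "price", "2"));
    }
}
```

The trade-off is losing JAXB's static typing for the update step; this fits best when the updates are simple value replacements.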

postgresql – Insert into partitioned tables with select, handle conflicts

I need to insert into partitioned tables using a SELECT. How do I handle conflicts? Postgres complains about this:
[0A000] ERROR: ON CONFLICT clause is not supported with partitioned tables

My query looks like this:

INSERT INTO table_partition SELECT * FROM old_table WHERE start_time >= '2014-12-01 00:00:00' AND start_time < '2015-01-01 00:00:00' ON CONFLICT ON CONSTRAINT table_partition_201412_pkey DO NOTHING ;

Currently I have created the partition tables, and the old table has triggers that copy inserts/updates/deletes into the partitions. I have to backfill the oldest data into the partitions; once that is done, I can drop the old tables and start using the partitioned ones. However, some of the data in a few date ranges may overlap, which is a problem for me. Could you please help?
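For reference: on PostgreSQL 10 the ON CONFLICT clause is rejected when the INSERT targets the partitioned parent, but it works when the INSERT targets the leaf partition directly (and PostgreSQL 11 lifts the restriction for DO NOTHING). A sketch, assuming `table_partition_201412` is the December leaf and `id` is the primary-key column (both assumptions, not stated in the question):

```sql
-- Option 1: target the leaf partition directly, where ON CONFLICT is allowed.
INSERT INTO table_partition_201412
SELECT * FROM old_table
WHERE start_time >= '2014-12-01 00:00:00' AND start_time < '2015-01-01 00:00:00'
ON CONFLICT ON CONSTRAINT table_partition_201412_pkey DO NOTHING;

-- Option 2: keep targeting the parent, but skip rows whose key already exists.
INSERT INTO table_partition
SELECT o.* FROM old_table o
WHERE o.start_time >= '2014-12-01 00:00:00' AND o.start_time < '2015-01-01 00:00:00'
  AND NOT EXISTS (SELECT 1 FROM table_partition p WHERE p.id = o.id);
```

Option 2 is not race-free against concurrent writers, so it suits a backfill window where the triggers are paused or the overlap ranges are known.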

Upgrading to handle Windows Server 2008's end of life

As Microsoft has ended support for Windows Server 2008, I am forced to look for an alternative.

What I need is:
1. A history of reliability.
2. Windows Server 2016 as an option (I don't want to have to load an ISO).
3. At least 2 virtual cores.
4. At least 4 GB of RAM.
5. At least 80 GB of SSD storage.
6. 3 TB + bandwidth.
7. An automatic backup service that allows me to restore the server (from the previous day, week, month).
8. Access to RDP / console.