Why is Netflix so successful when it’s so rubbish?

Netflix was the first major company to embrace streaming. It was THE go-to place for streaming, but Hollywood was slow to recognize streaming's value, and contractually most of the best content was not, and STILL is not, available for streaming.

Netflix recognized early that once Hollywood realized streaming's value, the studios would ALL want their own services. So it began investing in content creation, spending RIDICULOUS amounts of money to create its own content, since it was inevitable that third-party content would become increasingly hard to get.

Netflix was right about all of that. Unfortunately, it was so eager to create content that it greenlit just about ANY script that came its way. A normal TV channel has finite airtime. A movie studio can only release so many movies to theaters. That means they have to select the cream of the crop (in their flawed opinion). Netflix didn't have that limitation, so it bankrolled a lot of REALLY bad content.

But it's not all bad. Santa Clarita Diet, Orange Is the New Black, Ozark, and Stranger Things are legitimately good content. There is some quality there. It's just that there's a lot of REALLY bad stuff mixed in.

Has there been a Bitcoin- or cryptocurrency-related lottery app or website that was successful?

I'm doing some research on decentralized gambling. We all know about the dice sites, but have there been any lottery applications or websites that take in users' funds and then redistribute them to the users who guess the numbers correctly, so that winners are paid out of other users' money?

  • If so, how big did the jackpots get in this scenario?

  • What are the regulations behind running a public lottery in Bitcoin/altcoin?

  • Which is the longest-running Bitcoin lottery site thus far?

  • Have there been solutions to create a decentralized lottery system where a middleman is not needed?
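As background for that last bullet: the usual way to remove the middleman is a commit-reveal scheme, where players first publish a hash of their pick and later reveal it, and the winning number is derived from everyone's revealed randomness. Here is a toy Python sketch of the idea (all names are hypothetical; a real decentralized version would run these steps in a smart contract, not a script):

```python
import hashlib
import secrets

def commit(number: int, salt: bytes) -> str:
    """Betting phase: players publish only a hash of (number, salt)."""
    return hashlib.sha256(salt + number.to_bytes(4, "big")).hexdigest()

def reveal_and_draw(entries):
    """Reveal phase: verify each (number, salt) against its commitment,
    then derive the winning number from all salts combined, so no single
    player (and no operator) controls the draw."""
    for number, salt, commitment in entries:
        if commit(number, salt) != commitment:
            raise ValueError("tampered entry")
    seed = hashlib.sha256(b"".join(salt for _, salt, _ in entries)).digest()
    winning = int.from_bytes(seed, "big") % 100  # draw a number in [0, 100)
    winners = [n for n, _, _ in entries if n == winning]
    return winners, winning

# Usage: two players commit, then reveal after the betting deadline.
salt_a, salt_b = secrets.token_bytes(16), secrets.token_bytes(16)
entries = [(42, salt_a, commit(42, salt_a)), (7, salt_b, commit(7, salt_b))]
winners, winning = reveal_and_draw(entries)
```

The pot would then be split among `winners`; since the seed depends on every salt, no party can predict or steer the outcome once commitments are in.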

uefi – GRUB 2.04 install says it's successful but drops into the rescue shell

Our hardware is successfully running with UEFI/GRUB 2.02. The internal disk (/dev/sda) is GPT, and the first partition starts at 1 MiB, is 5 GB, and is formatted as vfat. We are running Gentoo and using overlayfs. I need to update to 2.04 so we can add Secure Boot.

I successfully used ebuild/emerge and updated GRUB. There are no errors during the install, but when the system reboots and runs GRUB it drops into the rescue shell with "error: no such partition". If I copy all the files to their appropriate place in the 'real' /boot/grub, I get "error: symbol grub_file_filters not found".

Prior to rebooting I've checked:

  • the disk partitions, and they look correct
  • dumped the first 4 LBAs of the disk, and they appear to be a GPT partition table
  • efibootmgr -v displays what appears to be the correct info: \EFI\BOOT\BOOTX64.EFI

/dev/sda1, mounted as /boot, has the EFI/BOOT/BOOTX64.EFI file as well as the grub/x86_64-efi/* and grub/grub.cfg files.

After basic install:
The ‘ls’ command in the rescue shell gives (hd0) (hd1) (hd2) (hd3) (hd4) (hd5)
The internal boot drive is on (hd4).

After I copy all the relevant files from ‘boot/boot.g1_n2/EFI|grub’:
The ‘ls’ command in the rescue shell gives (hd0) (hd1) (hd2) (hd3) (hd4) (hd4,gpt3) (hd4,gpt2) (hd4,gpt1) (hd5)
The internal boot drive is still on (hd4)

We have a terrible, awkward debug environment that involves a minimal Linux install environment and PXE-booting every ISO. I feel like I've tried everything I've found via Google, but I'm sure I've missed something.

I’ve tried following https://wiki.archlinux.org/index.php/GRUB#UEFI along with various other things including answers on this site.

I’m not sure what else to try. Any other ideas?
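For what it's worth, "symbol grub_file_filters not found" typically indicates a version mismatch: the firmware loads an old (2.02-era) core image BOOTX64.EFI, which then tries to load the new 2.04 modules. One thing I haven't ruled out is regenerating both from the same install, sketched below under the assumption that the ESP is mounted at /boot and the firmware boots the removable-media path:

```shell
# Rebuild the GRUB core image and modules from the same (2.04) install,
# so BOOTX64.EFI and grub/x86_64-efi/*.mod cannot disagree.
grub-install --target=x86_64-efi \
             --efi-directory=/boot \
             --boot-directory=/boot \
             --removable            # writes EFI/BOOT/BOOTX64.EFI on the ESP
grub-mkconfig -o /boot/grub/grub.cfg
```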

What are some good examples of successful search engine marketing?

Hello friends,

What are some good examples of successful search engine marketing?

How to ensure a successful update after a system failure during updb causes the site to remain in maintenance mode?

If a subsequent drush updb says "no updates required", does that guarantee the previous updates were successful?

When Drush queues up batch items to be processed for drush updb, it adds these operations:

  • Any module schema updates
  • Any entity definition updates (off by default, not supported after 8.7)
  • A cache clear & post-update hook functions (on by default)
  • Toggle off maintenance mode (if it was not already enabled before the update)

If the only updates were module schema updates then you might be in a scenario where things are ok; there are no post-update hooks needed (e.g. some batch data ops following a schema change) and only the last step of toggling off maintenance mode didn’t complete.

How do I ensure the integrity of the system when this happens without restoring a backup and running the updates again?

You’d have to audit the list of updated modules to double-check each schema update was applied and there were no post-update hooks missed in the update.

Depending on your update scenario this may or may not be easy, hence the best practice is doing a DB rollback and re-applying updates to ensure data integrity when uncertain.
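To make that audit concrete, here is one possible Drush-based spot check (Drush 9+ and Drupal 8 assumed; `mymodule` is a placeholder for whichever module you suspect):

```shell
# List any schema or post-update hooks still pending after the failure:
drush updatedb:status
# Check the installed schema version of a suspect module:
drush php:eval "var_dump(drupal_get_installed_schema_version('mymodule'));"
# Confirm and, if necessary, clear maintenance mode:
drush state:get system.maintenance_mode
drush state:set system.maintenance_mode 0
```

If updatedb:status comes back empty and each module's schema version matches its latest hook_update_N, you are likely in the "only the maintenance-mode toggle failed" scenario described above.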

❓ASK – How to make a successful business with a lower investment?

Do Americans typically get fingerprinted (or other biometric checks) upon successful visa free entry to the UK?

I suppose the different entry channels might imply different answers, but the more information, the better.

command line – Successful installation of module using pip3, but module not found when trying to import in Python

I am a bit new to using the command line, so I apologize if this question is rather basic. I believe I have installed a module called lmfit using

pip3 install lmfit

and it says this was successful. However, I still get a

ModuleNotFoundError: No module named 'lmfit'

when I try to import lmfit in a python script. I’ve tried to check if it was really installed using

pip3 show lmfit

and this gives the location of the module in a folder called python3.8 on my local computer.
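One thing I tried to rule out (a common cause, I gather, is pip3 belonging to a different Python than the one running the script) is checking which interpreter and search paths the script actually uses:

```python
import sys
import site

# Compare these against the Location line from `pip3 show lmfit`;
# if pip3 installed into a different Python's site-packages, this
# interpreter will never find the module.
print(sys.executable)          # the interpreter this script actually runs under
print(sys.version_info[:2])    # should match the python3.8 folder pip3 reported
print(site.getsitepackages())  # directories this interpreter searches for packages
```

If they disagree, running `python3.8 -m pip install lmfit` (with the exact interpreter the script uses) guarantees pip installs into that interpreter's site-packages.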

Any advice would be appreciated!

python – Start task as soon as multiple tasks are successful

I'm writing a hobbyist application that collects soccer data from different sources around the web, aggregates it into a graphical representation, and sends it out as a tweet. Now I want to fully automate this process in a more or less robust way (more or less because 1. this is a hobby and 2. it depends on scraping third-party websites).

I'm having trouble wrapping my head around how this full automation can be organized. The following tasks need to be done:

  1. check once daily if and when there are soccer matches I am interested in
  2. start collecting data on these matches from all sources as soon as a match
    is over
  3. retry collecting data every x minutes if it isn't available yet (the sources release their data at very different speeds, and it may sometimes take 24 hrs)
  4. as soon as all sources have been collected successfully, launch the aggregation and tweet

My question:
I don't know how to implement the connection between 3. and 4. in a good way. I think I need some wrapper that is called by the scheduler as soon as a match is over, but this doesn't seem robust:

import time

class MatchCollector:

    def __init__(self, teams, sources=settings.sources):
        self.teams = teams
        self.sources = sources

    def execute(self):
        finished = False
        counter = 0
        while not finished:
            finished = True
            for source in self.sources:
                s = SourceCollector(source, self.teams)
                successful = s.execute()
                if not successful:
                    finished = False
            if not finished:
                time.sleep(600)
                counter += 1  # was `counter += counter`, which stays at 0 forever
                if counter == 10000:  # arbitrarily chosen; have to calc when would be a good point to give up
                    raise IterationException('Tried for x hours without success. Something is broken')
        d = DataAggregator(self.sources)
        d.execute()

Am I missing something problematic by doing this?
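One alternative way to express the 3.→4. hand-off, without a blocking wrapper, is to give each source its own retry task and let `asyncio.gather` trigger the aggregation only when every one of them has succeeded. A minimal, self-contained sketch (here `collect_source` is a hypothetical stand-in for a real scraper; it "succeeds" on its second attempt):

```python
import asyncio

# Hypothetical stand-in for a per-source scraper; returns True once
# the source has published its data (on the second attempt here).
async def collect_source(source, attempts):
    attempts[source] += 1
    return attempts[source] >= 2

async def collect_with_retry(source, attempts, interval=0.01, max_attempts=10):
    """Step 3: retry one source until it succeeds or we give up."""
    for _ in range(max_attempts):
        if await collect_source(source, attempts):
            return source
        await asyncio.sleep(interval)
    raise RuntimeError(f"{source}: no data after {max_attempts} attempts")

async def run_pipeline(sources):
    """Step 4 runs only once every step-3 task has succeeded: gather()
    resolves when all of its awaitables do, and raises if any one fails."""
    attempts = {s: 0 for s in sources}
    collected = await asyncio.gather(
        *(collect_with_retry(s, attempts) for s in sources)
    )
    return sorted(collected)  # stand-in for the DataAggregator + tweet step

result = asyncio.run(run_pipeline(["sofascore", "whoscored"]))
```

The sources retry independently (a slow source doesn't delay the others' polling), a per-source `max_attempts` replaces the arbitrary shared counter, and any source that gives up fails the whole pipeline loudly instead of looping forever.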