wordpress – Google Optimize for A/B test: editing existing HTML says it’s over the limit even when I only delete some of it?

I am trying to set up an A/B test of a company’s website for my project. There are certain changes I’d like to make, such as swapping the ‘favourites’ products on the home page for the top-selling products, which I was trying to do by simply substituting the existing code with that of the top sellers (i.e. their JPGs, their links, their names, etc.).

However, I found that making ANY change (by selecting the element and clicking ‘edit HTML’), even just deleting one character from the original code as a test, pops up an error notification saying the HTML is over the limit. Even when I put the character back, the error message is still there. Strangely, it reports the code as well over the limit even though it is essentially the original code. I then have to click cancel every time.

Please see the attached photo for reference, where I deleted one character from the original code to demonstrate that any edit (even deleting original, working code) triggers the error that the HTML is over the limit.

View post on imgur.com

Thanks in advance!

duplicate content – How to optimize SEO for reddit-like sites with many outgoing links and “copy pastes”

Reddit is quite popular and generates a lot of traffic. However, they must have developed some interesting strategies for dealing with their SEO issues.

After all, they have two SEO antipatterns as features:

  • Tons of outgoing links
  • A lot of copy-and-paste content

How do they still perform well? Is it only their sheer popularity (and the SEO just is what it is)? Does the amount of unique content (like the comments generated by users) outweigh the rest? Or are there other relevant strategies they or similar sites use?

Is this the correct method to optimize local SEO?

It’s a local restaurant, and I need to narrow its targeting down to city-only search results.

So instead of targeting keywords like “best Asian restaurants”, do I need to use keyword + target city, like “best Asian restaurants in Wellington”, every time?

I will fix Core Web Vitals issues to optimize your website’s FCP, LCP and CLS for $10

To optimize your website’s speed and ranking on Google, I will fix the Core Web Vitals errors (CLS, LCP, FCP) and clear them from Google Search Console. The Core Web Vitals are the aspects of the user experience (loading, interactivity, and visual stability) that are common to all web pages; in Google’s metrics, these are referred to as the three elements of the user experience. Google has made the Core Web Vitals a ranking factor, so optimizing them is essential for Google ranking. To rectify Core Web Vitals problems, I will add additional CSS and make some changes to the head section of the page to make it work correctly. Your website’s web vitals can be optimized with or without a plugin. I will use web programming to remove the Google PageSpeed errors in Core Web Vitals from your website and from Google Search Console. There are several Core Web Vitals issues I will resolve (you can verify each metric yourself with the sketch after this list):

  1. Issues with LCP
  2. The FCP issues
  3. Issues relating to CLS
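
You can measure all three metrics in the browser before and after any fix. Below is a minimal read-only sketch using the standard PerformanceObserver API; paste it into the DevTools console (it only reports the numbers, it changes nothing):

// FCP: the timestamp of the 'first-contentful-paint' paint entry.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP:', entry.startTime, 'ms');
    }
  }
}).observe({ type: 'paint', buffered: true });

// LCP: the browser emits a series of candidates; the last one counts.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  console.log('LCP candidate:', entries[entries.length - 1].startTime, 'ms');
}).observe({ type: 'largest-contentful-paint', buffered: true });

// CLS: running sum of layout-shift scores not caused by recent user input.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log('CLS so far:', cls);
}).observe({ type: 'layout-shift', buffered: true });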

Please feel free to contact me before placing an order. Custom Orders are also accepted.

  • Providing 100% Delivery Satisfaction
  • Unlimited revisions.
  • Refund Guarantee

=========ORDER NOW=========

mariadb – Unknown column in ‘GENERATED ALWAYS’ when doing optimize table

I’m just running a command:
optimize table some_table_name_here;

But it results with:

Table                   | Op       | Msg_type | Msg_text
db.some_table_name_here | optimize | note     | Table does not support optimize, doing recreate + analyze instead
db.some_table_name_here | optimize | error    | Unknown column '`db`.`t`.`total_area`' in 'GENERATED ALWAYS'
db.some_table_name_here | optimize | status   | Operation failed

3 rows in set, 1 warning (0.001 sec)

Server version: 10.5.10-MariaDB-1:10.5.10+maria~buster-log mariadb.org binary distribution.

This problem appeared after upgrading from MariaDB 10.3 to 10.5 (via 10.4).
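
One way to investigate (a hedged sketch, not a confirmed fix; the generated column’s name, type, and expression below are placeholders):

-- Inspect how the generated column's expression was stored:
SHOW CREATE TABLE some_table_name_here;

-- If the GENERATED ALWAYS expression still shows a qualified name such as
-- `db`.`t`.`total_area`, dropping and re-adding the generated column
-- re-stores the expression in the current server's syntax:
ALTER TABLE some_table_name_here DROP COLUMN derived_area;
ALTER TABLE some_table_name_here
  ADD COLUMN derived_area DOUBLE AS (total_area * 2) STORED;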

dnd 5e – How to optimize an ‘arcane archer’-ish half-elf sorcerer?

Sorcerer has very little to offer an Arcane Archer-type build, with the exception of the Elemental Affinity feature from the Draconic Bloodline origin. Combined with Elemental Weapon, this allows you to add your Cha modifier as elemental damage to every shot you fire from your crossbow. The problem here is that Elemental Weapon is restricted to the Paladin class only.

So you have 3 options, explained below. Regardless of which you take, you’ll be focussing on Cha and Dex, with Con as a secondary as usual. So you’ll want to spend your 27 points in the 15, 15, 15, 8, 8, 8 pattern (each 15 costs 9 of your 27 points; the 8s cost nothing). Then Half-Elf brings you up to 17 Cha, 16 Dex, 16 Con very neatly. Each build gets you 5 ability score increases over your career. You’ll want to use these to bring Cha and Dex up to 20 and take Crossbow Expert so that you can dual-wield hand crossbows.

Option 1 is to take at least 9 levels of Paladin so you can cast Elemental Weapon. Then you’ll want to bring Sorcerer up to 8 and Paladin up to 12 so you don’t lose ability score increases. This isn’t a terrible idea, as many of the Paladin’s [X]ing Smite spells work with ranged attacks, making a pretty reasonable Arcane Archer. With the Oath of Devotion’s Sacred Weapon ability, you could also add your Cha modifier to your attack rolls. That said, many of the Paladin’s class features don’t work with ranged attacks, and it’s not exactly an Arcane Archer anyway. Also, you’ll have to have Str 13 to multiclass Paladin; so your starting ability scores will be 15,15,13,12,8,8 with Half-Elf making it 17 Cha, 16 Dex, 13 Str, 13 Con. (You only lose out on Con, so it’s not too bad.) The main thing this option gets you that the other 2 don’t is the cool Smite spells that add damage and additional effects to your crossbow attacks, just like an Arcane Archer.

Option 2 is to take 6 levels of Lore Bard for Additional Magical Secrets, which you can use to gain access to Elemental Weapon. You could use the other magical secret to get Branding Smite for that Arcane Archer flavour. Then, you’ll want to take 5 levels in a martial class for Extra Attack; I’d strongly recommend Fighter, so that you can take Fighter 6 and not lose an ability score increase. Once you’re Sorcerer 6/Bard 6/Fighter 6, take 2 more levels in whichever of these 3 classes you like to get your last ability score increase. This option is the least focussed, but you get the Archery fighting style and Improved Critical for your crossbow, so it’s got a slight edge with the crossbow you’re looking to optimize. This can also get you Arcane Archer-ing the fastest, since your combo comes into play at level 12.

The final option is to take 10 levels of Valor Bard, and use Magical Secrets to get Elemental Weapon. The other magical secret could be used for Banishing Smite, Branding Smite, or Staggering Smite for more Arcane Archer flavour. While this doesn’t sound as good as Lore Bard, you get Extra Attack for free along the way. Then you should probably take Sorcerer up to 8 and Bard up to 12 to keep all your ability score increases. This option gets you the most powerful spellcasting of all of them, but you don’t get the Archery fighting style.

I will create, fix and optimize an eye-catching Facebook business page professionally for $5

About this Gig: Hi, I am a professional Facebook business page creator and problem fixer, and I am here to provide you my services with 100% satisfaction and reliability. So don’t miss the chance and grab it now. My best services:

  • Create Facebook page
  • Design outstanding cover photo
  • Setup Profile and Facebook Business Page
  • Fix Facebook Page Issues
  • Setup template according to business
  • Add details of your services, Business
  • Add group & linking to the page
  • Add supported social media Tab
  • Add website, contact info, Location map
  • Setup Shop Now button
  • Fully SEO-friendly optimization
  • Tab Setup
  • Scheduling Platforms
  • Add auto reply service

Why me?

  • My packages always include optimizing your page to match your business.
  • 100% customer satisfaction.
  • Money-back guarantee.
  • Supporting the customers at any time
  • Quick delivery with quality service
  • Friendly communication
  • Provide reports & screenshots

Hire me and grow your business faster! NOTE: Buyers may need to share their personal or business information, location, social media links, and e-mail contact info.

web applications – How to optimize the request time for large data response?

I have created a dashboard that renders a list of clients into a DataTable. The data model structure is shown below:

(data model structure diagram)

When I had a few records in the clients schema, let’s say a thousand rows, the request time was fairly okay: around 4-5 seconds for the whole trip, from the request, to processing at the backend, to sending the response with the data and rendering it on the frontend. Once the data reached 10,000+ rows, the request takes far too long: now anywhere near 17 seconds, sometimes even more. I’m using Laravel, and its Eloquent ORM brings in the data from the related tables (which is highly useful), but as the data grows, so does the request time. My question is: what would be a better approach to minimize the request time?
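
One common direction (a minimal sketch, not a verified fix for this schema) is to switch the DataTable to server-side processing, so the browser requests one page of rows at a time instead of the full dataset; the endpoint URL and column names below are assumptions:

$('#clients-table').DataTable({
  serverSide: true,      // delegate paging, sorting and filtering to the backend
  processing: true,      // show a loading indicator while a page is fetched
  ajax: '/clients/data', // hypothetical endpoint returning one JSON page of rows
  pageLength: 50,
  columns: [
    { data: 'name' },
    { data: 'email' },
    { data: 'created_at' }
  ]
});

The matching Laravel endpoint would then eager-load the related tables (Eloquent’s with()) and paginate, instead of materializing all 10,000+ rows with their relations on every request.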

JavaScript optimize a function to check if a nested object is empty

My function needs to check whether a nested object is empty.
For flat objects there is no need for recursion, and a very basic function could be the following (inspired by PHP’s empty):

empty = function (mixedVar) {
      var undef
      var key
      var i
      var len
      var emptyValues = [undef, null, ''];
      for (i = 0, len = emptyValues.length; i < len; i++) {
        if (mixedVar === emptyValues[i]) {
          return true
        }
      }
      if (typeof mixedVar === 'object') {
        for (key in mixedVar) {
          if (mixedVar.hasOwnProperty(key)) {
            return false
          }
        }
        return true
      }

      return false
    }

Now, suppose that the object has several levels of nesting; then a recursive version is needed. It is worth noting that this version adds an emptyValues array to pass as input what should be considered empty:

  /**
     * Check if an object is empty recursively
     * @param {Object} mixedVar 
     * @param {Array} emptyValues values to treat as empty
     * @param {Number} iter current iteration, defaults 0
     * @param {Number} deep recursion levels, defaults 3
     * @returns {Boolean} true if object is empty
     */
    emptyDeep: function(mixedVar, emptyValues = [undefined, null, ''], iter=0, deep=3) {
      var i, len
      for (i = 0, len = emptyValues.length; i < len; i++) {
        if (mixedVar === emptyValues[i]) {
          return true
        }
      }
      if (typeof mixedVar === 'object') {
        for (const item of Object.values(mixedVar)) {
          if(iter>=deep) {
            return false
          } else if (!this.emptyDeep(item, emptyValues, iter + 1, deep)) {
            // iter + 1 (not ++iter) so every sibling is checked at the same depth
            return false
          }
        }
        return true
      }
      return false
    }

This function also has iter and deep parameters to control the recursion exit guard at some level of nesting, by default 3.
The problem is that with very big objects (on the order of 10 KB) this function becomes very slow as the object size and the nesting levels grow (like deep > 3).
As a result, the whole Node event loop is affected.
As an alternative to this, I have tried the approach below, which checks emptiness for an “almost” flattened object, at one level of nesting and without any recursion:

 /**
     * Check if a flattened object is empty
     * @param {*} obj 
     * @returns {Boolean} true if object is empty
     */
    emptyFlatten: function(obj) {
      if(this.empty(obj)) return true;
      if(Array.isArray(obj)) {
        return obj.filter(val => this.empty(val)).length==obj.length;
      }
      const keys = Object.keys(obj);
      return keys.map(key => obj[key]).filter(val => this.empty(val)).length==keys.length;
    }

that will work for structures like

[]
{}
{ A: {}, B: [], C: null, D: undefined }

but not for a 3-level object like:

{ A: {}, B: { C: [] } }

So, how can emptyDeep be optimized to make it fast for a few levels of recursion (let’s say deep <= 3)?
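
One direction, sketched below under the same emptyValues convention (an illustration, not a guaranteed drop-in): replace the recursion with an explicit stack and return as soon as any non-empty leaf is found, so large objects are no longer walked in full.

function emptyDeepIterative(mixedVar, emptyValues = [undefined, null, ''], deep = 3) {
  // Explicit stack instead of recursion; each entry carries its nesting level.
  const stack = [{ value: mixedVar, level: 0 }];
  while (stack.length > 0) {
    const { value, level } = stack.pop();
    if (emptyValues.includes(value)) continue;   // an "empty" leaf
    if (typeof value === 'object' && value !== null) {
      const children = Object.values(value);
      if (children.length > 0 && level >= deep) {
        return false;                            // depth limit hit: treat as non-empty
      }
      for (const child of children) {
        stack.push({ value: child, level: level + 1 });
      }
      continue;                                  // an object with no keys counts as empty
    }
    return false;                                // any other primitive is non-empty
  }
  return true;
}

The early exit is what matters for the Node event loop: the worst case (a genuinely empty object) still visits every node, but any non-empty branch stops the scan immediately, and there is no per-level call overhead.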

performance tuning – Optimize molecule distance analyzing code

I have a very large dataset (31552000 lines) of xyz coordinates in the following format

1 2 3
4 5 6 
7 8 9
. . . 

I have to compute the distance using the special method below.

Distance[{a_, b_, c_}, {d_, e_, f_}] := 
 Sqrt[If[Abs[a - d] >= (40/2), Abs[a - d] - 40, Abs[a - d]]^2 + If[
      Abs[b - e] >= (40/2), Abs[b - e] - 40, Abs[b - e]]^2 + If[
      Abs[c - f] >= (40/2), Abs[c - f] - 40, Abs[c - f]]^2]

Then I import the data.

data = Partition[
   Partition[ReadList["input.txt", {Real, Real, Real}], 16], 38];

The formatting is kind of strange: every 16 rows is one molecule, and every 38 molecules is one timestep. I take the distance between the 16th atom of each molecule and the 5th atom of each molecule. Then I select the distances that are less than 5.55 and determine the length of the resulting list. This is repeated for each of the 29,000 timesteps.

analysis =
  Flatten[
   Table[
    Table[
     Length[
      Select[
       Table[
        Distance[data[[r, y, 16]], data[[r, x, 5]]],
        {x, 1, 38}],
       # <= 5.55 &]
      ],
     {y, 1, 38}],
    {r, 1, 29000}]
   ];

This last section is the most computationally intensive part. For 29,000 timesteps and 38 molecules, it takes 40 minutes to process fully. It also takes too much memory (16+ GB per kernel) to parallelize. Is there any other method that will improve the performance? I have tried using Compile, but I realized that Table, the biggest bottleneck, is already compiled to machine code.

Below is an example of a dataset that takes my computer 2 minutes to complete with the analysis code. It is scalable to larger numbers of timesteps by changing 4000 to larger values.

data = Partition[
  Partition[Partition[Table[RandomReal[{0, 40}], 3*16*38*4000], 3], 
   16], 38]
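
For comparison, here is one possible vectorized rewrite (a sketch under the same assumptions as above: box length 40, cutoff 5.55, 38 molecules, atom 16 against atom 5). It replaces the per-pair Distance calls with whole-array arithmetic on packed arrays, one timestep at a time:

box = 40.; cutoff = 5.55;
a16 = data[[All, All, 16]];  (* timestep x molecule x {x, y, z} *)
a5 = data[[All, All, 5]];
analysis2 = Flatten[Table[
    Module[{w, dist2},
     (* all 38 x 38 pairwise difference vectors for this timestep *)
     w = Abs[Transpose[ConstantArray[a16[[r]], 38], {2, 1, 3}] - 
         ConstantArray[a5[[r]], 38]];
     w = w - box*UnitStep[w - box/2];  (* the same minimum-image wrap as Distance *)
     dist2 = Total[w^2, {3}];          (* 38 x 38 squared distances *)
     Total[UnitStep[cutoff^2 - dist2], {2}]  (* per-molecule counts within the cutoff *)
     ],
    {r, Length[data]}]];

If memory allows, the Table over r can also be swapped for ParallelTable, since each timestep is an independent, fixed-size block of array arithmetic.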