drush – Why is the --uri option needed to return absolute URLs?

I want to execute a View and export the JSON file to the files directory. I am using the Views Data Export and VDE Drush add-on modules. My view has a file/PDF field, and I want to provide a direct link to the PDF file, so I added a Views relationship for the field and a file URI formatter field. When I view the output the View creates at its path, the URL is an absolute link directly to the PDF file; it includes the domain name when viewed on the development or production instances, which is what I want. This is the correct URL path in the JSON:

href=\u0022http://mysite.com/files/Allergens.pdf

But when I use the Drush command below, provided by views_data_export or vde_drush,

drush vde myview myviewid myfile.json 

the domain name is replaced with "default", like this:

href=\u0022http://default/files/Allergens.pdf

Why is this happening only when using the Drush command? I found I can fix it by adding this to the end of the command:

--uri=mysite.com

So my new drush command looks like this:

drush vde myview myviewid myfile.json --uri=mysite.com

But doing this hardcodes the domain name into the command, and I want the URLs to work on development, production, and my local environment. Thank you for any help you can provide.
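
One way I could presumably avoid hardcoding the domain is to keep a per-environment URI in Drush site aliases and pick the alias at run time. A minimal sketch, assuming Drush 8-style aliases (the file name, paths, domains, and alias names are placeholders, not my actual setup):

<?php
// ~/.drush/mysite.aliases.drushrc.php (hypothetical file name)
// Each environment declares its own document root and base URI.
$aliases['dev'] = array(
  'root' => '/var/www/dev/docroot',
  'uri'  => 'http://dev.mysite.com',
);
$aliases['prod'] = array(
  'root' => '/var/www/prod/docroot',
  'uri'  => 'http://mysite.com',
);

The export command would then stay the same everywhere, e.g. drush @mysite.prod vde myview myviewid myfile.json, with Drush supplying the matching base URL per environment. Is that the intended approach, or is there a better one?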

8 – Canonical URLs per node

I'm trying to find out how to set a canonical URL on certain pages/articles on a Drupal 8 installation.
The Metatag module has an option for setting the canonical URL, but it is a global setting, not one specific to each page/article.
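
For clarity, the output I'm after is a per-node tag along these lines, where each article emits its own URL (the address below is just a placeholder):

<link rel="canonical" href="https://example.com/node/42" />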

Does anybody know how to solve this?

Thank you!

brave – How to copy URLs of all open Chrome tabs without an extension

Often when I’m using Chrome or Brave, I’d like to copy all of the URLs of my open tabs into another app such as Notepad.

I realize that many available extensions have this functionality, but I avoid installing extensions because of the security risk.

I figure there must be some native way to achieve my goal, much like the hidden trick in "How do I copy all file names in a folder to notepad?" for copying file names in Windows 10.

How can I copy all of the URLs of my open tabs without using an extension?

seo – Will deleting and re-uploading my new site help Google index it and resolve duplicate URLs that are now fixed?

I uploaded a new site with a domain name https://example.com (actually the domain consists of 3 words concatenated together).

If I do a Google search for those three words, the site does NOT appear in the results. In other search engines (the duck one, the tree-planting one, and the Bill Gates one) it appears right at the top, even though I have not registered the site with them.

Google Search Console gives me this error: "Duplicate, submitted URL not selected as canonical", and in the report it says

Indexing
User-declared canonical: None
Google-selected canonical: https://www.example.com/

In other words, it has chosen the WWW version instead of the non-WWW version.

I did not know about WWW/non-WWW and HTTP/HTTPS URLs when I made the site last week. Now I do, and the .htaccess file redirects everything to the non-WWW, HTTPS version. https://example.com/index.html is also redirected to https://example.com/ in that file.
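
For reference, the rules in that file look something like this (a sketch rather than the exact file; example.com stands in for the real domain):

# Send any HTTP or WWW request to the canonical non-WWW HTTPS host
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteRule ^ https://example.com%{REQUEST_URI} [L,R=301]
# Collapse /index.html onto the site root
RewriteRule ^index\.html$ / [L,R=301]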

Aside from this, the sitemap only has two entries: one for https://example.com/ and another for a solitary PDF file. The robots.txt file essentially allows everything. I have put a rel="canonical" link in the index HTML page as follows:

<link rel="canonical" href="https://example.com/" />

I am thinking of deleting the site from the web host, letting Google Search Console look for (and not find) the pages and thus get 404s, and then re-uploading it. Is doing so sensible or stupid? Are there any gotchas? If it is not a workable solution, what might a workable one be?

Tracking users with unique URLs

I have a client who wants to track users and see how many times they view certain pages, but without a user login portal, as they think a login will deter users from viewing the pages. Am I missing something, or is there a way to do this, perhaps with unique URLs or cookies?
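
To illustrate the unique-URL idea (the path and token below are hypothetical): each user would receive a link carrying their own identifier, and the server logs or analytics tool would then segment page views by that token.

https://example.com/report?visitor=7f3a9c

A cookie would do the same job without changing the link, being set on the first visit and sent back on each subsequent one.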

Facebook posts containing abbreviated URLs that are malformed when copied

We run a not-for-profit website that allows users to create content, which they then promote on social media.

What we've found is that, recently, several users have posted to Facebook using an invalid URL that contains three dots. For example, instead of:

www.example.com/news/my-article

They post something like:

www.example.com/.../my-article

When these URLs are clicked, requests come to our site including the "..." portion and get flagged as malicious (the "../" substring resembles a path-traversal attack). This can then result in the source IPs being blocked, which affects legitimate users and Facebook's own bots that make requests to us to scrape additional content.

I have concluded that this is the result of legitimate user error: someone copies and pastes the exact display text of a Facebook post, including URLs that have been abbreviated for display by substituting parts of them with "...".
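
As an aside, we are considering defusing these requests on our side rather than letting senders get blocked. A sketch, assuming Apache mod_rewrite (the /search target is a placeholder, not our actual configuration):

# Requests whose path contains a literal "..." segment (Facebook's
# display abbreviation) get redirected instead of tripping the WAF.
RewriteEngine On
RewriteCond %{REQUEST_URI} /\.\.\./
RewriteRule ^ /search [R=302,L]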

This seems to have surfaced recently, so I am wondering whether something has changed in how Facebook abbreviates URLs for display.

I am not a Facebook user and can't find a way to submit an issue, so I wanted to check here whether anyone else has seen this or knows a way to get feedback to Facebook.

SharePoint 2013 – lookup values with modified URLs after upgrade

After upgrading SharePoint 2013 from the May 2017 CU to the September 2019 CU, I see some strange behavior with lookup values.

All the URLs are modified and throw 404 errors.

For example:

Correct URL:

http://site.domain.com/_layouts/15/listform.aspx?PageType=4&ListId={dce67316-1627-4b40-a611-4a4e60a3b419}&ID=125&RootFolder=*

Incorrect URL:

http://site.domain.com\u002f\u002fsite.domain.com\u002f_layouts\u002f15\u002flistform.aspx?PageType=4\u0026ListId={dce67316-1627-4b40-a611-4a4e60a3b419}\u0026ID=125\u0026RootFolder=*

Thanks in advance for any support.

security – How to sniff the URLs an Android app connects to

I want to intercept traffic and find some streaming URLs from an Android app. When I try to use Packet Capture, the app says "please turn off VPN connection" once it starts, so I can't capture anything. Is there any other way to capture the URLs that are played inside that app with Google's ExoPlayer?
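
One alternative I'm considering (an assumption on my part, not something the app documents): run a proxy such as mitmproxy on a computer on the same Wi-Fi network and point the phone at it.

# Start mitmproxy on the computer; the port number is arbitrary
mitmproxy --listen-port 8080

I would then set the phone's Wi-Fi proxy to the computer's IP address and port 8080, and install the mitmproxy CA certificate from http://mitm.it on the device so HTTPS traffic is readable. My understanding is that apps using certificate pinning would still refuse the connection. Would this work here?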

Regards.

seo – How to disallow all URLs with GET parameters except those for JS/CSS files

I need to prevent crawling of all URLs with GET parameters, so I do it like this:

User-agent: *
Disallow: /*?*

or

User-agent: *
Disallow: /*?

At the same time, I need to allow URLs for my JS and CSS files, whose URIs look like this:

/wp-includes/js/wp-emoji-release.min.js?ver=5.4.2

So, I tried this one:

User-agent: *
Disallow: /*?*
Allow: /*?ver*

However, it doesn't work: the Google Search Console test says that the emoji file is still blocked. I tried to swap the instructions:

User-agent: *
Allow: /*?ver*
Disallow: /*?*

but it's still blocked. How can I allow the JS/CSS files?
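
For completeness, a variant I have not tested yet anchors the Allow rules to the file extensions instead of the parameter name:

User-agent: *
Disallow: /*?
Allow: /*.js?ver=
Allow: /*.css?ver=

As I understand Google's robots.txt handling, the longest (most specific) matching rule wins, so these longer Allow lines should override the shorter Disallow for those files.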

search engine indexing – For sitemaps, do I need to URL-encode URLs with international characters?

We have URLs that will contain international characters.

ex: https://example.com/title/oh-wie-schön-ist-panama

Do these special characters need to be URL-encoded in our sitemap for the crawlers, like this?
https://example.com/title/oh-wie-sch%C3%B6n-ist-panama
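
For reference, a sketch of how the encoded entry would sit in a standard sitemaps.org file (the URL is the example from above):

<url>
  <!-- %C3%B6 is the percent-encoded UTF-8 form of "ö" -->
  <loc>https://example.com/title/oh-wie-sch%C3%B6n-ist-panama</loc>
</url>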