ubuntu – Nginx Directory Index is Forbidden

I have a Laravel REST API for a mobile app running under Ubuntu with nginx. Everything was working just fine until today: users suddenly can't access the API. I checked the nginx error log and found the following:

``````
2021/04/18 01:21:52 [error] 2772#2772: *138808 directory index of "/var/www/html/mydomain/public/" is forbidden, client: 9x.1x.1x.5x, server: mydomain.com, request: "GET / HTTP/1.>
2021/04/17 23:16:01 [error] 2772#2772: *138792 directory index of "/var/www/html/mydomain/public/" is forbidden, client: 4x.15x.20x.2x1, server: mydomain.com, request: "GET /?XDEBUG>
``````

This is my nginx config:

``````
server {

    root /var/www/html/mydomain/public;

    # Add index.php to the list if you are using PHP
    index index.php;

    server_name mydomain.com www.mydomain.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.php?$query_string;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;

        # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
        # With php-cgi (or other tcp sockets):
        # fastcgi_pass 127.0.0.1:9000;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

server {
    if ($host = www.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;

    server_name mydomain.com www.mydomain.com;
    return 404; # managed by Certbot
}
``````

No one changed anything on the server side, and it was working before. What is the issue here?

I'd appreciate any help and ideas; this is a live project.
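While tracking down the cause, it can help to see how widespread the failures are. A minimal sketch (in Python, assuming the stock nginx error-log line format quoted above) that counts the forbidden-index errors per client:

```python
import re
from collections import Counter

# Matches the "directory index ... is forbidden" lines quoted above;
# the log format is an assumption based on the stock nginx error log.
LINE_RE = re.compile(
    r'directory index of "(?P<root>[^"]+)" is forbidden, '
    r'client: (?P<client>[\dx.]+)'
)

def summarize(log_lines):
    """Return a Counter of forbidden-index errors per client IP."""
    clients = Counter()
    for line in log_lines:
        m = LINE_RE.search(line)
        if m:
            clients[m.group("client")] += 1
    return clients

sample = [
    '2021/04/18 01:21:52 [error] 2772#2772: *138808 directory index of '
    '"/var/www/html/mydomain/public/" is forbidden, client: 9x.1x.1x.5x, '
    'server: mydomain.com, request: "GET / HTTP/1.1"',
]
print(summarize(sample))  # one hit for client 9x.1x.1x.5x
```

This doesn't fix anything by itself, but a sudden spike across many distinct clients points at a server-side config or PHP-FPM problem rather than a single misbehaving client.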

Index is empty in a chm file compiled with Microsoft HTML Help Workshop

I am working with Microsoft HTML Help Workshop, trying to build a help project that has more than 19k help files, including html, css, png, etc. I created an hhp file as new.hhp and put the following lines in it:

``````
[OPTIONS]
Auto Index=Yes
Auto TOC=9
Compatibility=1.1 or later
Compiled file=new.chm
Default Window=TriPane
Default topic=HomePage.htm
Display compile progress=Yes
Error log file=log.log
Full-text search=Yes
Index file=Index.hhk
Language=0x409 English (United States)

[WINDOWS]
TriPane="new",,"Index.hhk",,"HomePage.htm",,,,,0xe2520,255,0x304e,[0,0,800,600],,,,,2,,0

[FILES]
``````

I checked the option to create a binary index as shown here.

I checked the option for including keywords as shown here.

I compiled it as new.chm, and full-text search finds all the required topics, but the index is empty, as shown here.

I also checked the Index.hhk file, and it is empty as well, as shown here.

It's not possible for me to add keywords manually for 19k files. What should I do to get appropriate results?
Thanks for paying attention.
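Since hand-writing keywords for ~19k files isn't realistic, one option is to generate Index.hhk from each topic's `<title>`. Below is a minimal sketch (in Python; it assumes your topic files are `.htm`/`.html`, that their titles make reasonable keywords, and that pages sit under one directory — adjust the paths and parsing for your project):

```python
import html
import re
from pathlib import Path

# The .hhk format is an HTML file of "text/sitemap" OBJECT entries.
HHK_HEADER = """<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
<HTML><HEAD></HEAD><BODY>
<UL>
"""
HHK_FOOTER = "</UL>\n</BODY></HTML>\n"

TITLE_RE = re.compile(r"<title>(.*?)</title>", re.IGNORECASE | re.DOTALL)

def build_hhk(topic_dir, out_path):
    """Write an .hhk index with one keyword entry per topic's <title>.

    Returns the number of entries written.
    """
    entries = []
    for page in sorted(Path(topic_dir).rglob("*.htm*")):
        m = TITLE_RE.search(page.read_text(errors="ignore"))
        if not m:
            continue  # pages without a <title> get no keyword
        keyword = html.escape(m.group(1).strip())
        entries.append(
            '  <LI><OBJECT type="text/sitemap">\n'
            f'    <param name="Name" value="{keyword}">\n'
            f'    <param name="Local" value="{page.name}">\n'
            "  </OBJECT>\n"
        )
    Path(out_path).write_text(HHK_HEADER + "".join(entries) + HHK_FOOTER)
    return len(entries)
```

After generating the file, point `Index file=` in new.hhp at it and recompile; the compiler then has real keyword entries to build the binary index from.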

google search console – Your Sitemap or Sitemap index file doesn’t properly declare the namespace
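This Search Console error generally means the root element of the sitemap is missing the sitemap protocol namespace declaration: `xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"` on `<urlset>` (or on `<sitemapindex>` for an index file). A small sketch (in Python) that checks a sitemap before resubmitting:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def declares_sitemap_namespace(xml_text):
    """True if the root element is in the sitemap protocol namespace."""
    root = ET.fromstring(xml_text)
    # ElementTree renders a namespaced tag as "{namespace}localname".
    return root.tag.startswith("{" + SITEMAP_NS + "}")

good = ('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
        '<url><loc>https://example.com/</loc></url></urlset>')
bad = "<urlset><url><loc>https://example.com/</loc></url></urlset>"
print(declares_sitemap_namespace(good), declares_sitemap_namespace(bad))
# prints: True False
```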

haskell – Is there a way to count the number of occurrences of the first index of a list of tuples

I am trying to write a function in Haskell that takes a list of tuples (the first component of each tuple is an Int and the second a Char) and an integer, and returns the number of occurrences of that integer among the first components. So far I have:

``````
-- Count the tuples whose first component equals `find`.
counter :: Eq a => [(a, b)] -> a -> Int
counter [] _ = 0
counter ys find = length xs
  where xs = [x | (x, _) <- ys, x == find]
``````

For example, if I run:

`counter [(3,'a'),(4,'b'),(2,'a'),(3,'f'),(3,'t')] 3`

This should return 3, since there are 3 tuples in the list whose first component is 3.

Time-Complexity Verification: Code with two loops with an index halved at each iteration

I have the following code in Python and was asked to find the tightest upper bound in terms of Big-O. I've made two attempts below and I don't know which one is right; can you help me verify which is the right answer/approach?

``````
def f1(L):
    n = len(L)
    while n > 0:
        n = n // 2
        for i in range(n):
            if i in L:
                L.append(i)
    return L
``````

My attempts:
Approach 1:
The while loop runs $$\log(n)$$ times, and at the $$i$$-th iteration the for-loop runs $$\frac{n}{2^i}$$ times; inside the for-loop the conditional costs at most $$O(n)$$ (because `in` has complexity $$O(n)$$ according to https://wiki.python.org/moin/TimeComplexity). Thus the time-complexity of the for-loop at iteration $$i$$ is $$O\left(\frac{n^2}{2^i}\right)$$. So the time-complexity of the total code is: $$\sum_{i=1}^{\log(n)} O\left(\frac{n^2}{2^i}\right) = O\left(\sum_{i=1}^{\log(n)} \frac{n^2}{2^i}\right) = O\left(n^2 \cdot \frac{1-(1/2)^{1+\log(n)}}{1-1/2}\right) = O(n^2)$$

Approach 2:
In the for-loop we have the conditional `if i in L`; the `in` costs $$O(n)$$, thus the time-complexity of the for-loop is $$\sum_{i=1}^{n} O(n) = O\left(\sum_{i=1}^{n} n\right) = O(n^2).$$ Looking at the while loop, we see that $$n$$ is halved at each iteration because of the statement `n = n // 2`. Denote by $$n_k = \lfloor \frac{n}{2^k} \rfloor$$ the value of $$n$$ at the $$k$$-th iteration; disregarding the floor function (we won't care about $$\pm 1$$ in the value of $$n_k$$ since we care about time-complexity), we seek the smallest $$k$$ (where $$k$$ is the iteration of the while loop) for which $$n_k = 1 \leq \frac{n}{2^k} \iff k \leq \log(n)$$. Hence the total time complexity of the code is $$\sum_{k=1}^{\log(n)} O(n^2) = O(\log(n)) \cdot O(n^2) = O(n^2 \cdot \log(n))$$
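One way to sanity-check the two bounds is to instrument a copy of `f1` to count how many times the `i in L` test actually executes (a sketch; it returns the counter instead of `L`):

```python
def f1_instrumented(L):
    """Copy of f1 that returns how many `i in L` membership tests ran."""
    tests = 0
    n = len(L)
    while n > 0:
        n = n // 2
        for i in range(n):
            tests += 1          # one membership test per inner iteration
            if i in L:
                L.append(i)
    return tests

# n//2 + n//4 + ... + 1 tests in total, i.e. Theta(n) tests, each costing
# O(len(L)) -- and len(L) stays O(n), since at most ~n elements are appended.
for n in (16, 64, 256):
    print(n, f1_instrumented(list(range(n))))  # prints 15, 63, 255 tests
```

The total number of $$O(n)$$-cost membership tests is $$\Theta(n)$$, which matches the geometric-sum structure used in Approach 1.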

postgresql – Understanding postgres query planner behaviour on gin index

I need your expert opinion on index usage and query planner behaviour.

``````
\d orders
                        Partitioned table "public.orders"
           Column            |           Type           | Collation | Nullable | Default
-----------------------------+--------------------------+-----------+----------+---------
 oid                         | character varying        |           | not null |
 user_id                     | character varying        |           | not null |
 tags                        | text[]                   |           | not null |
 category                    | character varying        |           |          |
 description                 | character varying        |           |          |
 order_timestamp             | timestamp with time zone |           | not null |
 .....
Partition key: RANGE (order_timestamp)
Indexes:
    "orders_uid_country_ot_idx" btree (user_id, country, order_timestamp)
    "orders_uid_country_cat_ot_idx" btree (user_id, country, category, order_timestamp DESC)
    "orders_uid_country_tag_gin_idx" gin (user_id, country, tags) WITH (fastupdate=off)
    "orders_uid_oid_ot_key" UNIQUE CONSTRAINT, btree (user_id, oid, order_timestamp)
``````

I have observed the following behaviour depending on the query parameters.
When I run the following query:
`select * from orders where user_id = 'u1' and country = 'c1' and tags && '{t1}' and order_timestamp >= '2021-01-01 00:00:00+00' and order_timestamp < '2021-03-25 05:45:47+00' order by order_timestamp desc limit 10 offset 0`

case 1:
for records with tag `t1`, where `t1` occupies 99% of the records for user `u1`, the 1st index `orders_uid_country_ot_idx` is picked up.

``````
Limit  (cost=0.70..88.97 rows=21 width=712) (actual time=1.967..12.608 rows=21 loops=1)
  ->  Index Scan Backward using orders_y2021_jan_to_uid_country_ot_idx on orders_y2021_jan_to_jun orders  (cost=0.70..1232.35 rows=293 width=712) (actual time=1.966..12.604 rows=21 loops=1)
        Index Cond: (((user_id)::text = 'u1'::text) AND ((country)::text = 'c1'::text) AND (order_timestamp >= '2021-01-01 00:00:00+00'::timestamp with time zone) AND (order_timestamp < '2021-03-25 05:45:47+00'::timestamp with time zone))
        Filter: (tags && '{t1}'::text[])
Planning Time: 0.194 ms
Execution Time: 12.628 ms
``````

case 2:
But when I query for tag value `t2` with something like `tags && '{t2}'`, where `t2` is present in 0 to <3% of a user's records, the gin index is picked up.

``````
Limit  (cost=108.36..108.38 rows=7 width=712) (actual time=37.822..37.824 rows=0 loops=1)
  ->  Sort  (cost=108.36..108.38 rows=7 width=712) (actual time=37.820..37.821 rows=0 loops=1)
        Sort Key: orders.order_timestamp DESC
        Sort Method: quicksort  Memory: 25kB
        ->  Bitmap Heap Scan on orders_y2021_jan_to_jun orders  (cost=76.10..108.26 rows=7 width=712) (actual time=37.815..37.816 rows=0 loops=1)
              Recheck Cond: (((user_id)::text = 'u1'::text) AND ((country)::text = 'ID'::text) AND (tags && '{t2}'::text[]))
              Filter: ((order_timestamp >= '2021-01-01 00:00:00+00'::timestamp with time zone) AND (order_timestamp < '2021-03-25 05:45:47+00'::timestamp with time zone))
              ->  Bitmap Index Scan on orders_y2021_jan_to_uid_country_tag_gin_idx  (cost=0.00..76.10 rows=8 width=0) (actual time=37.812..37.812 rows=0 loops=1)
                    Index Cond: (((user_id)::text = 'u1'::text) AND ((country)::text = 'c1'::text) AND (tags && '{t2}'::text[]))
Planning Time: 0.190 ms
Execution Time: 37.935 ms
``````
1. Is this because the query planner identifies that 99% of the records are covered in case 1, so it skips the gin index and directly uses the 1st index? If so, does Postgres identify this based on the stats?

2. Before the gin index was created, when the 1st index was picked for case 2, performance was very bad since the index access range is high, i.e. the number of records satisfying the conditions on user id, country and the time column is very high. The gin index improved it, but I'm curious to understand how Postgres chooses between them so selectively.

3. `orders_uid_country_cat_ot_idx` was added to support filtering by category, since when the gin index was used for filtering by just category, or by both category and tags, performance was bad compared to when the btree index on `user_id, country, category, order_timestamp` is picked up. I expected the `gin` index to work well for all combinations of category and tags filters. What could be the reason? The table contains millions of rows.
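On question 1: the planner's estimates for array predicates like `tags && '{t1}'` do come from per-column statistics. For array columns, Postgres keeps most-common-element statistics, which can be inspected in the `pg_stats` view (a sketch; the partition name is taken from the plans above and may differ in your setup):

```sql
-- Selectivity stats the planner uses for the && (overlap) predicate.
-- Note: statistics are kept per partition, not on the partitioned parent.
SELECT most_common_elems, most_common_elem_freqs
FROM pg_stats
WHERE tablename = 'orders_y2021_jan_to_jun'
  AND attname = 'tags';
```

If `t1` shows up there with a high frequency, that would explain the choice: the planner expects the gin index to match most rows, so the btree index that already delivers rows in `order_timestamp` order looks cheaper for a `LIMIT 10`.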

sql server – How to view size of clustered index for a given table?

You can use the system stored procedure `sp_spaceused` to get the size of the table, which will be the size of your clustered index, since the clustered index is the logical storage of the table itself.

Example syntax: `EXEC sp_spaceused 'UserActions';`

Specifically, you'd want to look at the `data` column of the result set, since that tells you the total size of the table itself, which is the clustered index size. (The `index_size` column is the total space consumed by all indexes, so it could give different results depending on whether you're using other indexes, such as nonclustered indexes, too.)

The index size per row will vary from row to row (depending on whether you have variable-sized data types or nullable columns), but you can get an average per row by dividing the `data` column by the `rows` column returned by the above procedure.

sql server – Table scan instead of index seeks happening when where clause filters across multiple tables in join using OR

We have an application-generated query using a view that joins two tables with a LEFT OUTER JOIN. When filtering by fields from just one table (either table), an index seek happens and it's reasonably fast. When the WHERE clause includes conditions on fields from both tables combined with an OR, the query plan switches to a table scan and doesn't use any of the indexes.

All four fields that are being filtered on are indexed on their respective tables.

Fast query plan where I filter on 3 fields from one table: https://www.brentozar.com/pastetheplan/?id=Hym_4PRSO

Slow query plan where I filter on four fields…three from one table and one from another table: https://www.brentozar.com/pastetheplan/?id=r1dVNDRHO

Ideally, I would like to understand why this happens and how to nudge the query engine to use all the indexes.
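A common workaround for OR predicates that span tables in a join (the view and column names below are hypothetical, since the view definition isn't shown) is to split the query into two index-friendly branches and combine them with `UNION`, which de-duplicates rows:

```sql
-- Each branch filters on one underlying table's columns only, so each
-- can use its own index seek; UNION removes duplicates across branches.
SELECT v.*
FROM dbo.TheView AS v
WHERE v.Table1ColA = @p1 AND v.Table1ColB = @p2 AND v.Table1ColC = @p3

UNION

SELECT v.*
FROM dbo.TheView AS v
WHERE v.Table2Col = @p4;
```

Whether this helps depends on the plan shapes in the linked pastes; it is a sketch of the technique, not a drop-in fix.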

c# – Index Value Converter Optimisation

I've created an IValueConverter in order to bind to the index of an item in a collection.

However, it seems to be rather slow/inefficient, and I wonder if there is a better/different way to do this.

``````
public class IndexValueConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        try
        {
            // The ConverterParameter is the CollectionView itself; use its
            // bound ItemsSource to find the item's position (1-based).
            CollectionView collectionView = (CollectionView) parameter;
            IList collection = (IList) collectionView.ItemsSource;

            int convertedValue = collection.IndexOf(value) + 1;
            return convertedValue;
        }
        catch (Exception e)
        {
            Debug.WriteLine(e);
            return -1;
        }
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}
``````

XAML

``````
<CollectionView
    x:Name="JobPins"
    ItemsSource="{Binding ACollectionView}">
    <CollectionView.ItemTemplate>
        <DataTemplate x:DataType="ADataModel">
            <Label
                BackgroundColor="Black"
                Style="{StaticResource ListLabels}"
                Text="{Binding .,
                       Converter={StaticResource IndexConverter},
                       ConverterParameter={x:Reference Name=JobPins}}"
                TextColor="WhiteSmoke" />
        </DataTemplate>
    </CollectionView.ItemTemplate>
</CollectionView>
``````

Add new index for search vendor in magento2

How can I implement searching for vendors in Magento 2's catalog search?