application state: recommended browser Back button behavior for a SPA

Our company is building a SPA, and we are discussing the best behavior for the browser Back button. The SPA is a management tool with many tables and filters. Each time a filter changes, the URL parameters are updated to reflect the current state.

There are two sides to the argument of what the Back button should do:

  • One side thinks the Back button should change state: every time you change a filter, you push a new state onto the browser history.
  • The other side thinks the Back button should change pages: filter changes are ignored, and the browser returns to the previous page/view.

Both options have their advantages and disadvantages. Is there a common opinion about what the Back button should do?
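For context, the two positions map directly onto the two History API calls: `history.pushState` makes Back step through filter changes, while `history.replaceState` keeps Back as a page-level action. A minimal sketch (the helper names are my own, not from either proposal):

```javascript
// Serialize the current filters into a query string.
function buildQuery(filters) {
  return new URLSearchParams(filters).toString();
}

// Apply a filter change to the URL. `pushEntry` selects between the two
// camps: push a history entry (Back steps through filters) or replace the
// current one (Back leaves the page).
function applyFilters(filters, { pushEntry = false } = {}) {
  const url = "?" + buildQuery(filters);
  if (typeof history !== "undefined") {
    if (pushEntry) history.pushState({ filters }, "", url);
    else history.replaceState({ filters }, "", url);
  }
  return url;
}
```

A common middle ground is to push an entry only for "navigation-like" changes (switching tabs or major views) and replace for rapid refinements such as typing in a search box.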

Web service: is there a way to protect against forged messages from a SPA that consumes a web service directly?

I am currently developing a web service, and communication can be a bottleneck. It would be at least 100 ms faster if the browser could call the web service directly, instead of first sending messages to the consumer's server, which then forwards them to the web service along with the consumer identifier.

But if I store that consumer identifier in the SPA, anyone could forge a request on behalf of the consumer; they only need to inspect what the SPA sends to the web service. Is there any way to protect the web service against this kind of forged message?

google – Design patterns for asset loading for SPA using vanilla JS

What are the patterns or design strategies for loading assets on demand in a SPA while navigating between sections of a large, enterprise-scale application?

I'm heavily influenced by how Gmail works, and I believe they do not use a framework like Angular.

Leaving browser inconsistencies aside, I want to pin down the patterns for managing SPAs built with plain vanilla JS. What I am trying to discover are the design patterns around this. I know the following, but maybe someone can dig deeper.

Event: a hash change

  1. Push the history state.
  2. Load partial HTML into the content sections.
  3. Load the CSS that supports the loaded content.
  4. Load the JS that supports the loaded content.
  5. The loaded JS calls remote services to fill in the partial's dynamic content.

What is also unclear to me is the unloading part: do we remove the script and link tags we added when loading the next partial, and what are the implications of that?
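The steps above can be sketched roughly like this; the route table, file names, and the `data-section` tagging convention for unloading are all my own illustration:

```javascript
// Map each hash route to its partial and per-section assets (illustrative).
const routes = {
  "#/inbox": { html: "partials/inbox.html", css: ["inbox.css"], js: ["inbox.js"] },
  "#/settings": { html: "partials/settings.html", css: ["settings.css"], js: ["settings.js"] },
};

function resolveRoute(hash) {
  return routes[hash] || routes["#/inbox"]; // fall back to a default section
}

// Intended to run on the "hashchange" event in a browser. Tags injected
// assets with data-section so they can be found and removed on the next
// navigation. Returns the resolved route either way.
function loadSection(hash) {
  const route = resolveRoute(hash);
  if (typeof document === "undefined") return route; // non-browser: resolve only
  // Unload the previous section's assets before adding new ones.
  document.querySelectorAll("[data-section]").forEach((el) => el.remove());
  route.css.forEach((href) => {
    const link = document.createElement("link");
    link.rel = "stylesheet";
    link.href = href;
    link.dataset.section = hash;
    document.head.appendChild(link);
  });
  route.js.forEach((src) => {
    const script = document.createElement("script");
    script.src = src;
    script.dataset.section = hash;
    document.head.appendChild(script);
  });
  return route;
}
```

On the unloading question specifically: removing a stylesheet's link tag does stop its rules from applying, but removing a script tag does not undo code that has already run, which is why unloading CSS is usually more clearly useful than unloading JS.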

I am sure that an expert in the field who has done this repeatedly can go deeper into the pattern at steps like these.

Tutorial links are also much appreciated; it is costing me a lot of effort to find the right content, and YouTube and Google are full of Angular HTML-partial material that I do not want.

I am looking for a deeper understanding and real knowledge of what is actually happening under the hood. Thanks, but no thanks, to the frameworks that try to save me the time.

seo – Indexing of pages in an AngularJS-based SPA

I recently worked on a portal based on AngularJS. One of the main areas of the portal lists the job offers available at the organization. At the moment, Google indexes some of our job offers, but since jobs expire and new jobs are posted periodically, Google does not always have the most recent data.

When I check in Google Search Console whether the job links are indexed, some of them are, but only the SPA root section gets indexed (no job content is available on the indexed page); some hit a redirect error, and some cannot load the JS scripts needed to render the page correctly, even though none of the scripts is hosted externally.

What is the correct way to handle search engine optimization for single-page applications? Do I have to create a parallel, pre-rendered version of the page to serve to the crawler?

Should I write a scheduled job that generates a sitemap.xml listing all available jobs and submits it for crawling? Even in that case, how can I force the removal of stale jobs?
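For the sitemap half of the question, here is a sketch of what such a scheduled job might generate; the URL scheme and the job-record shape are assumptions:

```javascript
// Build a sitemap.xml string from the current job list. Expired jobs simply
// drop out of the sitemap; lastmod tells crawlers when each job changed.
function buildSitemap(jobs, baseUrl) {
  const entries = jobs
    .filter((job) => !job.expired)
    .map(
      (job) =>
        `  <url>\n    <loc>${baseUrl}/jobs/${job.id}</loc>\n` +
        `    <lastmod>${job.updated}</lastmod>\n  </url>`
    )
    .join("\n");
  return (
    `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    `${entries}\n</urlset>`
  );
}
```

Note that dropping expired jobs from the sitemap is not enough on its own: the job URLs themselves should return 404 or 410 once a job expires, which is what eventually gets them removed from the index.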

seo – How do search engines treat canonical URLs between a SPA and a server-rendered version?

I'm planning to build an application that ends up serving duplicate content:

  • completely rendered on the client, behind a wall of JS
  • server-rendered, SEO-friendly

Under one path I will serve a React SPA that renders entirely on the client.

Under another path I will serve the same React application, but rendered on the server to improve SEO.

The content of both applications will be the same, but presumably the server-rendered version will have better SEO. To avoid duplicate-content problems, I will follow Google's "Consolidate Duplicate URLs" guide. I will:

  • Specify a preferred domain:
  • Set a rel=canonical header in my response for the pages under the client-rendered path, pointing to the corresponding server-rendered URL
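A minimal sketch of what that canonical header might look like; the origin and path here are placeholders, not the asker's real URLs:

```javascript
// Build the value of an HTTP Link header declaring the canonical URL
// for a client-rendered page.
function canonicalHeader(path, canonicalOrigin) {
  return `<${canonicalOrigin}${path}>; rel="canonical"`;
}

// Express-style usage (illustrative only):
// app.use((req, res, next) => {
//   res.set("Link", canonicalHeader(req.path, "https://www.example.com"));
//   next();
// });
```

The same relation can also be declared with a `<link rel="canonical">` tag in the page head; the header form is handy when the client-rendered shell is a single static file.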

Here is my question: which page speed counts for SEO? Is it the page speed of the client-rendered application (slow, because all the JS has to be downloaded in a blocking way), or the page speed of the server-rendered, SEO-friendly version?

seo – Googlebot cannot index my hbs-based SPA website

As part of a school project, I created a single-page application website using Handlebars rendered on the server side. However, when I inspect the site with Google Search Console, all Google can find is the bare index.html page, before the actual content is injected through JavaScript.

The website replaces an old site (written in 1997). For some reason, Google still regularly tries to index old, now non-existent links from the old site, while it has only tried to index the new, working links once, 2 months ago. Needless to say, all links on the domain apart from the landing page are considered duplicates, since to Google they contain only the index.html page. The dead links keep popping up when I google the domain.

It is worth noting that I have a valid robots.txt file and a sitemap that lists the valid links on my website. Search Console does not report any errors, and it seems to have read my script files correctly. With the Googlebot simulator, I also see only the index.html page. For what it's worth, Bing and Yahoo give the same results.

According to every source I have found, Google in 2019 should handle SPAs like this just fine. Since I am not getting any errors, I do not even know how to begin diagnosing the problem.

Even if you cannot solve my problem, just pointing me in the right direction would be greatly appreciated at this point. I feel like I do not have much to go on.

seo – Google page rank with a pseudo-SPA Angular web application

I will explain what I mean by pseudo-SPA. I have spent many hours over the past 2 years trying to get an Angular SPA project called braapfinder dot com indexed.

At some point a few months ago, I realized that the CPU time it takes to render each page was presumably killing the rank Google assigns to each page, so I decided to pre-render the main pages so that the Google crawler can read any of them in a fraction of a second on the first hit.

However, I left the home page booting the "real" Angular application (not a pre-render) so that users can enjoy all the features of the project. I assume most of my traffic will go to the internal (pre-rendered) pages.

I expected this to raise my rank, but it is not happening. I have worked hard on the HTML, on microformats, on every SEO technique I could find, but nothing seems to work.

There is still something there that is not working well. I have two main theories:

A) Something related to the hreflang pages (I have 11 languages): could I be penalized for some kind of duplicate content?

B) Even though the "important" pages are pre-rendered, the fact that the home page is not causes the crawler to reach those pages through the Angular SPA, which makes rendering a page "slow" because of pointless CPU load.

If someone knowledgeable could review this project and tell me whether there is something obviously wrong, or offer any kind of advice, I would be very grateful.

web – Loading state for a SPA: skeleton screens, loading indicators, or both?

This question is not about whether skeleton screens are better than loading indicators in a general sense; there is plenty about that on Google. Rather, I would like a sense of the best practices for using one or both in a web application whose data changes frequently.

Here is the scenario (super simple):

I have a web application with 2 screens. The application is a "single-page application", meaning navigation, data fetching, and UI rendering happen in the browser without full page refreshes.

The 2 screens:

  1. A master list screen showing a table whose rows are filled with summary data. Each row is clickable; when clicked, the user goes to the second screen described below.
  2. A detail screen that loads a more extensive set of data into a form for editing. After successfully updating the data on this screen, the user is redirected back to the list screen with a confirmation that their data was saved.

A requirement of this web application is that the data must be refreshed from the server every time either of these 2 screens is entered. Therefore, when moving between the list and the details, the screens should re-fetch the data from the server and display the updated data. The same applies to deep links straight into these screens. Another requirement is that the list data can be paginated and filtered, also going to the server for those requests. Server requests are usually fast on a fast connection (< 0.5 seconds), but could be slower for any number of reasons.

1) Skeleton screens: I like the idea of showing skeleton screens when initially loading the data for the list or detail screens. However, when moving back and forth between the list and detail screens, would displaying a skeleton screen before each fetch become annoying and distracting? The skeleton-screen examples I see on the web are for use cases where the data is loaded initially but does not change much after that, even as you transition between screens. In my case, would showing a loading indicator before each fetch be better UX?

2) For in-place updates, as when filtering and paginating the data on the list screen, I assume displaying a skeleton screen would be even more annoying. In this case, I assume the user would simply like the list updated once the server returns the data. Would this also be a case for just displaying a loading indicator and updating the list in place when the data comes back from the server?
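One compromise worth sketching (helper names are invented for illustration): show the skeleton only when there is nothing on screen yet, and a lighter indicator when stale rows are already visible:

```javascript
// Wrap a data fetcher so that the first load shows a skeleton, while
// later refreshes keep the stale rows visible behind a small spinner.
function createListLoader(fetchRows, ui) {
  let cache = null;
  return async function load(params) {
    if (cache === null) ui.showSkeleton(); // first visit: nothing to show yet
    else ui.showSpinner();                 // refresh: keep stale rows visible
    cache = await fetchRows(params);
    ui.hideIndicators();
    ui.renderRows(cache);
    return cache;
  };
}
```

This matches the intuition in both questions above: the skeleton communicates "the layout is coming" on a cold screen, while the spinner communicates "the data you see is being refreshed" without tearing the page down between fetches.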