macOS / Ubuntu / Boot Camp Windows seems to have broken my bootloader

I have a MacBook with important data on it. I also had Ubuntu installed on a partition of the hard drive.

Everything worked fine.

Then today I used Boot Camp to create a Windows partition, since I wanted to try some of my software.

I followed the instructions with a USB drive, it rebooted, and once I finished and restarted my MacBook, all my partitions were in bad shape. It keeps booting to a black screen or to the macOS Internet Recovery screen.

I have no idea what to do and I am terrified of losing my data. I can provide any additional information you need.

Here are some images of what Disk Utility shows.


Can working in the response sheet of an active Google Form break something?

I would like to sort rows and remove duplicates while the form is active and the response sheet is receiving incoming responses.

Could working on the sheet while a response is coming in cause problems?
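To be safe, I was thinking of doing the cleanup on a copy of the data rather than on the live response sheet itself. A rough sketch of what I mean, assuming I first export the responses to a CSV (the file name here is a placeholder; "Timestamp" is the default Google Forms column header):

import pandas as pd

# Read an exported copy of the responses ("responses.csv" is a placeholder name)
df = pd.read_csv("responses.csv")

# Remove exact duplicate rows
df = df.drop_duplicates()

# Sort by the timestamp column
df = df.sort_values("Timestamp")

# Write the cleaned copy somewhere else, leaving the live sheet untouched
df.to_csv("responses_clean.csv", index=False)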

web scraping – Python: BeautifulSoup scrape, courses with blank descriptions throw off the data

I'm trying to scrape some course data from https://bulletins.psu.edu/university-course-descriptions/undergraduate/ for a project.

# -*- coding: utf-8 -*-
"""
Created on Mon Nov 5 20:37:33 2018

@author: DazedFury
"""
# Here we only import BeautifulSoup and the requests library
from bs4 import BeautifulSoup
import requests

# returns a CloudflareScraper instance
# scraper = cfscrape.create_scraper()

# URL and output text file
text_file = open("Output.txt", "w", encoding='UTF-8')
page_link = 'https://bulletins.psu.edu/university-course-descriptions/undergraduate/acctg/'
page_response = requests.get(page_link)
page_content = BeautifulSoup(page_response.content, "html.parser")

# List to store the URLs
URLArray = []

# Find links
for link in page_content.find_all('a'):
    if ('/university-course-descriptions/undergraduate' in link.get('href')):
        URLArray.append(link.get('href'))
k = 1

# Parse loop
while (k != 242):
    print("Writing " + str(k))

    completeURL = 'https://bulletins.psu.edu' + URLArray[k]

    # this is the URL we have already determined is safe and legal to scrape
    page_link = completeURL

    # here we get the contents of the URL, using the requests library
    page_response = requests.get(page_link)

    # we parse the contents of the URL with the HTML parser and store it in a variable
    page_content = BeautifulSoup(page_response.content, "html.parser")
    page_content.prettify

    # Find and write all the text in the title and description divs
    paragraphs = page_content.find_all('div', {'class': 'course_codetitle'})
    paragraphs2 = page_content.find_all('div', {'class': 'courseblockdesc'})
    j = 0
    for i in range(len(paragraphs)):
        if i % 2 == 0:
            text_file.write(paragraphs[i].get_text())
            text_file.write("\n")
            if j < len(paragraphs2):
                text_file.write(" ".join(paragraphs2[j].get_text().split()))
                text_file.write("\n")
            text_file.write("\n")
            if (paragraphs2[j].get_text() != ""):
                j += 1

    k += 1

# FORMAT
# text_file.write("&nbsp;")
# text_file.write("\n\n")

# Close the text file
text_file.close()

The specific information I need is the course title and the description. The problem is that some of the courses have blank descriptions, which throws off the order and produces wrong data.

output.txt

bulletin

I thought about just checking whether the course description is blank, but on the site the 'courseblockdesc' div simply does not exist when a course has no description. So when I find_all the courseblockdesc elements, nothing gets added to the list for those courses and the order ends up messy. There are far too many of these to fix manually, so I was hoping someone could help me find a solution for this.
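One idea I have not tried yet: instead of building two parallel lists, walk the page course by course and read the title and description out of the same block, so a missing description just becomes an empty string. This is only a rough sketch, assuming each course on the bulletin page is wrapped in a div with class 'courseblock' (that container class is my guess; only 'course_codetitle' and 'courseblockdesc' appear in my code above):

# Rough sketch, untested: pair each title with the description from the same
# course block so a missing 'courseblockdesc' cannot shift later descriptions.
# The 'courseblock' container class is an assumption about the page structure.
from bs4 import BeautifulSoup
import requests

page = requests.get('https://bulletins.psu.edu/university-course-descriptions/undergraduate/acctg/')
soup = BeautifulSoup(page.content, "html.parser")

with open("Output.txt", "w", encoding="UTF-8") as out:
    for block in soup.find_all('div', {'class': 'courseblock'}):
        title = block.find('div', {'class': 'course_codetitle'})
        desc = block.find('div', {'class': 'courseblockdesc'})
        title_text = title.get_text(strip=True) if title else ""
        desc_text = " ".join(desc.get_text().split()) if desc else ""
        out.write(title_text + "\n")
        out.write(desc_text + "\n\n")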