There are three schools of thought on this.
a. Yes, PageRank will pass to the robots.txt-blocked page and be lost there, so find a way to avoid linking to it.
b. No, it's an internal link. The way PageRank flows around a site prevents it being lost to pages blocked by robots.txt. (A toy sketch after this list shows why both stories can sound plausible.)
c. John Mueller's position (he has actually commented on this thread) is that no, it won't impact you, but then he muddies the waters by saying you'd be better off working on your content instead. Since tech and content teams can work in parallel, as he well knows, this reasoning is a strawman and not at all useful. It's impossible to tell whether he means the impact is so small you should focus elsewhere, or that internal linking to pages blocked by robots.txt has zero impact.
He has also gone on public record saying most of what you read on this topic is dated, wrong, etc., so who knows if what he said was true in the first place, or is still true now?
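To make the disagreement between (a) and (b) concrete, here is a toy power-iteration PageRank in the textbook formulation. Nobody outside Google knows how their systems actually treat a page that is linked internally but blocked from crawling; the graph, the damping factor, and the assumption that a dangling page's rank is redistributed evenly are all modeling choices for illustration, not claims about Google.

    # Toy PageRank by power iteration. Purely illustrative numbers; this is
    # the textbook model, not a statement about how Google actually works.
    def pagerank(graph, damping=0.85, iters=100):
        """graph maps each page to the pages it links out to."""
        n_pages = len(graph)
        rank = {page: 1.0 / n_pages for page in graph}
        for _ in range(iters):
            # A robots.txt-blocked page is a dangling node: the crawler sees
            # links INTO it but never its outlinks. Textbook PageRank spreads
            # a dangling node's rank evenly over all pages each step.
            dangling = sum(rank[p] for p in graph if not graph[p])
            new_rank = {}
            for page in graph:
                inflow = sum(rank[src] / len(links)
                             for src, links in graph.items() if page in links)
                new_rank[page] = ((1 - damping) / n_pages
                                  + damping * (inflow + dangling / n_pages))
            rank = new_rank
        return rank

    # Scenario 1: home links to a product page AND to a blocked faceted page.
    with_link = {"home": ["product", "blocked"],
                 "product": ["home"],
                 "blocked": []}
    # Scenario 2: the link to the blocked page has been removed.
    without_link = {"home": ["product"], "product": ["home"]}

    for label, graph in [("with link to blocked page", with_link),
                         ("link removed", without_link)]:
        ranks = pagerank(graph)
        print(label, {p: round(r, 3) for p, r in ranks.items()})

Under these assumptions the blocked page soaks up a chunk of rank (school a's worry), but part of that flows back each step via the dangling redistribution (school b's intuition), which is exactly why both camps can argue their case with a straight face.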
I don't know which is right, so:
a. I assume A is correct. I benefit if it's right, and I cause no harm if it's wrong.
b. I assume B is incorrect. I benefit if it's wrong, and I cause no harm if it's right.
c. I assume John is not going to give a straight answer that definitively closes the subject. He has stated his opinion about where time is best spent, but he has never absolutely closed the door on some benefit coming from removing links to robots.txt-blocked pages.
So, wherever possible, I don't link to pages blocked by robots.txt.
His answer is also unhelpful in that he talks about using robots.txt to stop duplicate content, when it's more often used to stop faceted content being indexed; faceted pages are a subset of the main content, not duplicates of it.
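For anyone who wants to act on that rule, here is a minimal sketch of the audit it implies: take a list of internal links from a crawl and flag any that point at URLs your robots.txt disallows. It uses Python's standard-library urllib.robotparser; the rules and links below are invented for illustration (the disallows mimic the faceted-navigation case), and note that the standard-library parser only does the original prefix matching, so it will not honour Google-style wildcards.

    from urllib.robotparser import RobotFileParser

    # Invented robots.txt rules; the plain prefix style is all the
    # standard-library parser understands (no wildcards like /*?sort=).
    rules = [
        "User-agent: *",
        "Disallow: /search",   # site search results
        "Disallow: /facets/",  # faceted navigation, a subset of main content
    ]

    # Hypothetical internal links, as exported from a crawl of your own site.
    internal_links = [
        "https://example.com/shoes/",              # normal product page
        "https://example.com/facets/colour-red/",  # faceted page, blocked
        "https://example.com/search?q=boots",      # search page, blocked
    ]

    parser = RobotFileParser()
    parser.parse(rules)

    for url in internal_links:
        if not parser.can_fetch("Googlebot", url):
            print("internal link to a blocked page, consider removing:", url)

In practice you would load the live robots.txt and a link export from your crawler of choice instead of the hard-coded lists, but the check itself stays this simple.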