SQL Server 2012: Measuring the latency of a long-distance SQL connection

We have written a simple "client" application for a customer that connects to their SQL Server database, runs approximately 360 queries against a "property", and returns that data as a .NET object we call "cProp".

The queries vary in complexity, but the longest takes perhaps 400 ms. When we recently ran it for 60 properties, the full run took roughly 3 minutes. The customer has since decided to move the database to a remote server, with about 65 ms of network latency between us. As a result, the same run now takes 51 minutes.
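A rough back-of-envelope check seems consistent with latency being the culprit: 360 queries × 60 properties ≈ 21,600 queries. If each query costs at least one 65 ms round trip, that alone is about 23 minutes of network wait; if each query needs two round trips (say, one to execute and one to fetch), it is closer to 47 minutes, which is roughly the slowdown we are seeing.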

Now I'm 99% sure the problem is row retrieval (pulling the result rows back over the network), but I need to prove it. The DBA ran a trace for me, but since I had never used one before, I was horrified to discover how little information it contained. What I did see is that the work took less than 4 minutes of server time, which reinforces my belief that the time is going into retrieving the rows.
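One idea I have for proving it from the client side: as far as I know, System.Data.SqlClient can collect per-connection statistics that include server round trips, rows selected, bytes received, and time spent waiting on the network. A minimal sketch of what I mean (the connection string, table name, and reading loop are placeholders for what our app really does):

```csharp
using System;
using System.Collections;
using System.Data.SqlClient;

class ConnectionStats
{
    static void Main()
    {
        // Placeholder connection string; substitute the real one.
        const string connString = "Server=remote-server;Database=PropDb;Integrated Security=true";

        using (var conn = new SqlConnection(connString))
        {
            conn.StatisticsEnabled = true;   // start collecting per-connection counters
            conn.Open();

            // Placeholder query standing in for the ~360 real ones.
            using (var cmd = new SqlCommand("SELECT * FROM SomePropertyTable", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* consume the rows the way the real app does */ }
            }

            // Round trips, rows, bytes, and milliseconds spent waiting on the network.
            IDictionary stats = conn.RetrieveStatistics();
            foreach (string key in new[] { "ServerRoundtrips", "SelectRows", "BytesReceived",
                                           "ExecutionTime", "NetworkServerTime" })
            {
                Console.WriteLine("{0}: {1}", key, stats[key]);
            }
        }
    }
}
```

If NetworkServerTime dwarfs everything else, that would seem like fairly strong evidence that the wire, not the server, is where the minutes are going. Is that a reasonable way to read those counters?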

But how do I measure that? The trace shows nothing about the number of rows returned or the total size of the data. Am I doing it wrong? I also know that whatever happens between batches happens off the server, so how can I figure out what's going on there? More generally, how do you troubleshoot problems that occur "outside the server"?
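The only other approach I've thought of for the "outside the server" part is to time the execute call and the row-reading loop separately with a Stopwatch, so server work and the over-the-wire fetch show up as two different numbers. A sketch with made-up names (the query and the @id value are just placeholders):

```csharp
using System;
using System.Data.SqlClient;
using System.Diagnostics;

class FetchTiming
{
    static void Main()
    {
        // Placeholder connection string, query, and parameter value.
        const string connString = "Server=remote-server;Database=PropDb;Integrated Security=true";
        const string query = "SELECT * FROM SomePropertyTable WHERE PropertyId = @id";

        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(query, conn))
        {
            cmd.Parameters.AddWithValue("@id", 42);
            conn.Open();

            var sw = Stopwatch.StartNew();
            using (var reader = cmd.ExecuteReader())
            {
                // Time to get the first result packet back: mostly server work plus one round trip.
                long executeMs = sw.ElapsedMilliseconds;

                int rows = 0;
                while (reader.Read()) rows++;

                // Everything after that is spent pulling rows across the wire (plus any client-side work).
                long totalMs = sw.ElapsedMilliseconds;
                Console.WriteLine("execute: {0} ms, fetch: {1} ms, rows: {2}",
                                  executeMs, totalMs - executeMs, rows);
            }
        }
    }
}
```

Is that kind of client-side instrumentation the right way to go, or is there something on the server (in the trace, or elsewhere) that I'm missing?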