I found rsync when I wanted something like scp that would copy files to a server without overwriting the files already there. Rsync seems to be more efficient in general, but it really shines when a bunch of the files already exist on the destination (rsync will just skip them).
To copy files from your current computer to a server:
rsync -azv -e ssh [directory] [username]@remotehost:[location]
$ sudo rsync -azv -e ssh directory-on-my-computer/ firstname.lastname@example.org:~/files/go/here
a = archive mode (preserves permissions, timestamps, symlinks, etc.)
z = compress file data during the transfer
v = verbose
r = recurse through subdirectories and copy all of them (already implied by a, so you can usually leave it out)
e ssh = use ssh as the transport
Compressing file data saves bandwidth, so if that is an issue it is another big win over scp. Rsync can also resume an interrupted file transfer (with the --partial flag, or -P, which is shorthand for --partial --progress), while with scp you have to redo the whole file transfer.
To copy from the server to your computer, just reverse the order of the two locations. One caveat: unlike scp, rsync cannot copy directly between two remote hosts; one side of the transfer has to be the machine you are running it on.
As a reminder, I realize this blog is made up of stuff that is obvious to a large number of people. It is really aimed at me (so I can quickly find what I found before), and to a lesser extent at others like me (people who use the cli some but are not system administrators or programmers to any significant extent).