Linux help - mass copy of files via ftp

Greebo

I cannot for the life of me remember.

I know there is a way to transfer a big batch of files - a whole directory structure if you so choose - via ftp using something better than mput and mget.

So what is it?

I'm trying to decide which method of copying everything would be better. The biggest batch of files to move is the attachments (2G worth), and zipping them doesn't reduce them much (about 10%). So I'm not sure whether I'd rather do tar/gzip and a single FTP transfer, or a massive push of everything using a batch upload.

help? ;)
 
Use the tar command. Create a tarball file, then FTP the file out.

Should work on Linux. Does work on other versions of *ix that I've used.
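
A minimal sketch of that approach (the hostname and filenames are made up):

Code:
# roll the whole directory tree into one compressed file
tar -zcf attachments.tar.gz attachments/
# then send that single file in binary mode
ftp ftp.newserver.com
> bin
> put attachments.tar.gz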
 
Two options: wsftp or a similar GUI program that does this, or use tar/zip.
 
*sigh*

From the command line:

Code:
ftp ftp.sitename.com
> bin
> hash
> prompt                  (toggles interactive prompting, allowing mass transfers)
> cd <remote directory>
> ls
> cd ..                   (etc.)
> pwd                     (shows current path)
> lcd <local directory>
> mput *                  (sends ALL files; depending on the FTP version this may include subdirs. mget receives.)

The transfer of subdirs doesn't always work across all platforms.

The easy way is to use a GUI FTP client like cuteftp http://www.cuteftp.com/cuteftp/
 
I'm not using wsftp to transfer between two Linux boxes. I'd spend twice the time downloading and uploading.

I want to push from one Linux box to another.

So I'll tarball it, then zip it, then send it.
 
Do you have SSH access on the new server? If so--use scp

scp -r /local/path username@newserverip:/remote/path

Otherwise..just tar it:

tar -zcf test.tar.gz test/

To untar it:

tar -zxf test.tar.gz

If you are using tar over ssh and the directory has lots of files, do not use verbose (-v). Often the rate at which tar compresses/uncompresses is limited by the rate at which the verbose output can be displayed.

I wouldn't mess with trying to FTP all the files. You're likely to have problems with the files not correctly transferring as binary or ascii depending on their type. If you want to use FTP--tar it up.

Also consider using -p so that your permissions will be preserved. Sometimes you don't want your access times to change either..If that's the case you'll want --atime-preserve.
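
For example, a sketch with GNU tar (the archive and directory names are made up):

Code:
# create; --atime-preserve puts the original access times back
# on the source files after reading each one
tar -zcf attachments.tar.gz --atime-preserve attachments/
# extract; -p restores the archived permissions exactly
tar -zxpf attachments.tar.gz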

A complete seamless migration that no one would notice could be done. If you have root access you could set up both MySQL servers to do replication. This way the new server's SQL data would be the same as this one and you wouldn't have to shut this server down for the move. Always available to help...
 
Jesse said:
Do you have SSH access on the new server? If so--use scp ...

does ANYBODY understand ANYTHING he just said??? :dunno:

I think he's just trying to sound smart!
 
Do you have SSH access on the new server? If so--use scp

scp -r /local/path username@newserverip:/remote/path
Ahh, perfect. I knew there was a command.

Otherwise..just tar it:

tar -zcf test.tar.gz test/

To untar it:

tar -zxf test.tar.gz

If you are using tar over ssh and the directory has lots of files, do not use verbose (-v). Often the rate at which tar compresses/uncompresses is limited by the rate at which the verbose output can be displayed.
Thousands of files, over 2G worth. Attachments here are stored in the file system, not in the DB.

I wouldn't mess with trying to FTP all the files. You're likely to have problems with the files not correctly transferring as binary or ascii depending on their type. If you want to use FTP--tar it up.
Amen.

Also consider using -p so that your permissions will be preserved. Sometimes you don't want your access times to change either..If that's the case you'll want --atime-preserve.
Are these tar options or scp options?

A complete seamless migration that no one would notice could be done. If you have root access you could set up both MySQL servers to do replication. This way the new server's SQL data would be the same as this one and you wouldn't have to shut this server down for the move. Always available to help...


Sadly, no root on the destination. The test run will be on a Media Temple grid server. I have SSH but no root access.
 
does ANYBODY understand ANYTHING he just said??? :dunno:

I think he's just trying to sound smart!

Yep, Jesse's right on the money.

I'd guess it's faster to make a tarball as opposed to SQL replication, but I've never tried it over an internet connection.
 
does ANYBODY understand ANYTHING he just said??? :dunno:

I think he's just trying to sound smart!

Yes, but then I have spent far too many hours of my life behind a Unix command line doing file moves and setting up cron jobs. An essential part of wireless telecommunication is knowing how to use the vi editor in Unix and just being familiar with that old language.
 
Yes, but then I have spent far too many hours of my life behind a Unix command line doing file moves and setting up cron jobs. An essential part of wireless telecommunication is knowing how to use the vi editor in Unix and just being familiar with that old language.

Real men use EMACS.

*ducks and runs*
 
Jesse and other Linux gurus:

Is there a way that I could copy just about everything up now, file-wise, in advance, and then when ready for the final switchover, compare what's on here file-wise (attachments and gallery being the only dirs likely to change) with what's on the new site and just update the files that need it, via the command line?

Copying everything up as it is now is the easy part. It's the compare-and-synchronize part that I need help with.

My idea is, shut down PoA here (forums closed message), backup, transfer and restore the database. Synchronize the few files that would have changed or been added, then send the DNS change up to the domain servers.

Ideally, PoA would only be down for a couple of hours before the new server would begin popping up on people's computers.
 
TAR has an update function, but IME it's slow. I *think* that function only applies to the archive, not the extraction; I've never used it the other way.
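
For reference, that's the -u flag (a sketch; the archive name is made up). Note it only works against an uncompressed archive, not a .tar.gz:

Code:
# append only files newer than the copies already in the archive
tar -uf poa.tar attachments/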
 
TAR has an update function, but IME it's slow. I *think* that function only applies to the archive, not the extraction; I've never used it the other way.

That would, I think, mean reuploading the whole tar though.

I'm thinking more of a secure link between the servers, comparing the directories directly.


The TAR won't be small. 2G min.
 
VI - the video game that comes with Unix. The mission is to get in and out without messing up your file. Do that and you get to move on to EMACS.

Former HP-UX and AIX system manager, from before GUIs.
 
*ahem*.

Can we keep this on topic, please?
 
Are these tar options or scp options?

-p works for both... Although with scp, -p will preserve permissions, modification times, and access times.
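
For instance, combining them (a sketch reusing the placeholder paths from above):

Code:
# -r recurse into directories, -p preserve modes and times
scp -rp /local/path username@newserverip:/remote/path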

wsuffa said:
I'd guess it's faster to make a tarball as opposed to SQL replication, but I've never tried it over an internet connection.
SQL replication would be good for doing a complete seamless migration. In theory you could move POA to another server with no one noticing. This way Scott could survive the move since he needs to post 8 times per second.

Since the new setup doesn't give you root with complete control over the setup, this is not an option.

Greebo said:
Is there a way that I could copy just about everything up now, file-wise, in advance, and then when ready for the final switchover, compare what's on here file-wise (attachments and gallery being the only dirs likely to change) with what's on the new site and just update the files that need it, via the command line?

Copying everything up as it is now is the easy part. It's the compare-and-synchronize part that I need help with.

My idea is, shut down PoA here (forums closed message), backup, transfer and restore the database. Synchronize the few files that would have changed or been added, then send the DNS change up to the domain servers.

Ideally, PoA would only be down for a couple of hours before the new server would begin popping up on people's computers.

Rsync is what you want. You can do it over SSH so it's encrypted. Some information on it here:
http://everythinglinux.org/rsync/

Check and see if you can access the "rsync" command on Media Temple. If not--we could probably upload an rsync binary to Media Temple...

I would think one could do this move in much less than a couple of hours. Here is a rough plan:
  1. Set your DNS TTL to 10 seconds in advance.
  2. Rsync all your data over in advance.
  3. Now...shut down the forum
  4. Dump the DB, transfer it server to server, and import (a sketch follows this list)
  5. Run Rsync again to sync data
  6. Bring up new POA Website
  7. Set DNS A records to new server ip
  8. Now.. you're set and people should resolve the new IP. You could do some extra trickery on the old POA server by forwarding any connection to port 80 to the new server's ip. iptables would be the best way to accomplish that. But really--with a real low TTL there isn't a huge need.
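
Step 4 could look roughly like this (a sketch; the database name and user are made up):

Code:
# on the old server: dump the database and push it over ssh
mysqldump -u poa_user -p poa_db > poa_db.sql
scp poa_db.sql username@newserverip:~/
# on the new server: load the dump
mysql -u poa_user -p poa_db < poa_db.sql

And the port-80 trick in step 8 would be something like (NEWIP is a placeholder; requires IP forwarding enabled on the old box):

Code:
# redirect incoming port 80 to the new server, then masquerade the
# forwarded traffic so replies route back through the old box
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination NEWIP:80
iptables -t nat -A POSTROUTING -p tcp -d NEWIP --dport 80 -j MASQUERADE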
 
rsync is available on both.

Reading up on it now.

And "a couple hours" was my outside estimate, btw. This would allow me ample time to backup, upload, restore, etc to bring everything up to date.
 
tar and rsync native are probably the better options, but another option is CVSup. CVSup is built on top of rsync. FreeBSD uses it as a primary means of pulling software updates. I've found it to be pretty slick in that application.

http://www.cvsup.org/
 
I think rsync is cool but more than I feel like tackling for this one-time job.

tar ftw...

Of course there are a few gallery images that want to give me grief...
 
Jesse said:
I would think one could do this move in much less than a couple of hours. ...

Jesse's got it nailed.

Greebo said:
I think rsync is cool but more than I feel like tackling for this one-time job.

tar ftw...

Of course there are a few gallery images that want to give me grief...


There isn't much to tackle.

[chuck@currentserver] rsync -av --delete -e ssh /path/to/copy/from/ username@example.com:/path/to/copy/to/

(note: the above line wrapped....but it should all be on one line)

It'll prompt you for your ssh password and it'll start syncing.

One "thing to know"

If you end the copy-from path with a '/' it will copy the contents of the directory to the copy-to path. If you don't it will copy the directory+contents to the copy-to path.

Code:
 Example:

[chuck@currentserver] cd /copy
[chuck@currentserver] ls from
dir1 dir2
[chuck@currentserver] rsync -av /copy/from /copy/to
[chuck@currentserver] ls to
from
[chuck@currentserver] ls to/from
dir1 dir2

VS.

[chuck@currentserver] cd /copy
[chuck@currentserver] ls from
dir1 dir2
[chuck@currentserver] rsync -av /copy/from/ /copy/to
[chuck@currentserver] ls to
dir1 dir2
 
Well, what was scaring me off was the whole running-rsync-as-a-server aspect of it.

Are you saying I don't need to do that?
 
Well, what was scaring me off was the whole running-rsync-as-a-server aspect of it.

Are you saying I don't need to do that?

No, no service necessary. If you have ssh access and the rsync binary at each end...you just run the command and it'll sync the contents of the directories.
 
Hm.

So I should be on *this* server now when I run rsync, and tell it to connect to the future remote server as the target.

Now here's a question for you...

The future server currently has a host name like Q19123.mygrid.com (not a real name).

My login for ssh, however, is greeboissoawesome@pilotsofamerica.com (again, bogus)

The command: rsync -av --delete -e ssh /httpdocs/poa greeboissoawesome@pilotsofamerica.com:~/newhttpd is telling rsync to connect to pilotsofamerica.com, right?

So how do I force it to connect to Q19123.mygrid.com but log me in as greeboissoawesome@pilotsofamerica.com ?
 
Ooh wait:
Code:
rsync -av --delete -e ssh /httpdocs/poa greeboissoawesome@pilotsofamerica.com@q19123.mygrid.com:~/newhttpd
?
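
If the extra @ trips it up, another way (assuming OpenSSH is the transport) is to hand the login name to ssh itself with -l, so the rsync target only carries the hostname:

Code:
rsync -av --delete -e 'ssh -l greeboissoawesome@pilotsofamerica.com' /httpdocs/poa q19123.mygrid.com:~/newhttpd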
 
Hmm, it takes a LONG time for rsync to build the file list when there's as many files as we've got. ;)
 
save a step and concat tar and (g)zip

to tar and zip, use tar cvf - foodir | gzip > foo.tar.gz

on the other end, use gunzip -c foo.tar.gz | tar xvf -

or alternatively:

gunzip < foo.tar.gz | tar xvf -
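
The same pipe trick also works straight over ssh with no intermediate file at all (a sketch; the host and paths are made up):

Code:
# tar to stdout, compress, and unpack on the far end in one shot
tar cf - foodir | gzip | ssh user@newserver 'cd /dest && gunzip | tar xf -'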
 
save a step and concat tar and (g)zip

to tar and zip, use tar cvf - foodir | gzip > foo.tar.gz

on the other end, use gunzip -c foo.tar.gz | tar xvf -

or alternatively:

gunzip < foo.tar.gz | tar xvf -

You're working too hard. The newer versions of tar know how to compress and uncompress.

-Z, --compress, --uncompress
        Filter the archive through compress(1).
-z, --gzip, --gunzip
        Filter the archive through gzip(1).
 
You're working too hard. The newer versions of tar know how to compress and uncompress.
new versions? unheard of around here ... :) I currently have xterms open in Solaris 5.7 and 8, AIX 5.3, HP-UX 11.11, and assorted other crap ...
 
Hmm, it takes a LONG time for rsync to build the file list when there's as many files as we've got. ;)
rsync does take a long time (depending on the number of files, as you noted) ... the beauty of it is that it's quick at keeping up with the deltas
 
I think I'd rather finish learning Japanese or move on to Korean.
 
That would, I think, mean reuploading the whole tar though.

I'm thinking more of a secure link between the servers, comparing the directories directly.


The TAR won't be small. 2G min.

Any reason you couldn't select files by timestamp newer than the time of the first (full) tar into a separate tar? Then just overwrite the first batch with the second (much smaller) one and presto, the latest copies will survive. You could even do this recursively, with the tars getting smaller and smaller as you go, if the second tar takes longer to transport than you want to freeze the originating system.
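
With GNU tools that could look roughly like this (a sketch; the filenames are made up, using the first tarball's own timestamp as the cutoff):

Code:
# list files modified after the full tarball was created
find attachments/ -type f -newer full-backup.tar.gz > newer.txt
# roll just those into a second, much smaller tarball
tar -zcf incremental.tar.gz -T newer.txt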
 
Any reason you couldn't select files by timestamp newer than the time of the first (full) tar into a separate tar? Then just overwrite the first batch with the second (much smaller) one and presto, the latest copies will survive. You could even do this recursively, with the tars getting smaller and smaller as you go, if the second tar takes longer to transport than you want to freeze the originating system.

You could. But this is quite a bit of work when rsync does a better job itself.
 
I can't stand not seeing what's going on. rsync just says, "Building file list..." then done...then nothing ...

So I'm trying it now with --progress enabled and with -z as well (compression).

rsync -rlptvz --progress --delete -e ssh

So far it's at least telling me what it's doing. 8400 files in the list so far...

We have a boatload of attachments...
 