
Backup with a larger amount of data

Nico Blok

Backup with a larger amount of data
« on: January 10, 2003, 10:20:18 PM »
Hi there,

I saw this question before but nobody answered it, so I will ask it again:

I would like to back up my e-smith server to a W2K share. The problem occurs when the backup file grows larger than 2 GB. Is there a solution for this? Flexbackup reports a broken pipe (a possible workaround is sketched after the log below):

------------------------------------
flexbackup version 0.9.8 /etc/flexbackup.conf syntax OK

|------------------------------------------------
| Archiving to file; "mt setblk 0" skipped
| Doing level 0 backup of all using dump
| Archiving to file; "mt rewind" skipped
| Creating index key 200301092230.10
| Making sure tape is at end of data...
| Archiving to file; "mt eod" skipped
| Tape #0
| Filesystems = /
|------------------------------------------------
| Archiving to file; "mt tell" skipped
|------------------------------------------------
| File number 1, index key 200301092230.10
| Backup of: /
| Date of this level 0 backup: Thu Jan 09 22:30:10 2003
| Date of last level 0 backup: the epoch
|------------------------------------------------
| (dump -0 -b 10 -a -f - / | gzip -4) | buffer -m 3m -s 10k -t -p 75 -B
|-o \
|  /mnt/backup/root.0.20030109.dump.gz
|------------------------------------------------
  DUMP: Date of this level 0 dump: Thu Jan  9 22:30:11 2003
  DUMP: Dumping /dev/hda5 (/) to standard output
  DUMP: Added inode 7 to exclude list (resize inode)
  DUMP: Label: /
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 5393542 tape blocks.
  DUMP: Volume 1 started with block 1 at: Thu Jan  9 22:36:29 2003
  DUMP: dumping (Pass III) [directories]
  DUMP: dumping (Pass IV) [regular files]
  DUMP: 2.39% done at 429 kB/s, finished in 3:24
  DUMP: 4.58% done at 412 kB/s, finished in 3:28
  DUMP: 6.91% done at 413 kB/s, finished in 3:22
  DUMP: 9.05% done at 406 kB/s, finished in 3:20
  DUMP: 11.41% done at 410 kB/s, finished in 3:14
  DUMP: 13.57% done at 406 kB/s, finished in 3:11
  DUMP: 15.96% done at 409 kB/s, finished in 3:04
  DUMP: 18.22% done at 409 kB/s, finished in 2:59
  DUMP: 20.19% done at 403 kB/s, finished in 2:57
  DUMP: 22.43% done at 403 kB/s, finished in 2:52
  DUMP: 24.66% done at 403 kB/s, finished in 2:48
  DUMP: 26.64% done at 399 kB/s, finished in 2:45
  DUMP: 28.79% done at 398 kB/s, finished in 2:40
  DUMP: 30.75% done at 394 kB/s, finished in 2:37
  DUMP: 33.06% done at 396 kB/s, finished in 2:31
  DUMP: 35.38% done at 397 kB/s, finished in 2:26
  DUMP: 37.53% done at 396 kB/s, finished in 2:21
  DUMP: 39.69% done at 396 kB/s, finished in 2:16
  DUMP: 41.58% done at 393 kB/s, finished in 2:13
  DUMP: 43.73% done at 393 kB/s, finished in 2:08
  DUMP: 45.85% done at 392 kB/s, finished in 2:04
  DUMP: 48.35% done at 395 kB/s, finished in 1:57
  DUMP: 50.30% done at 393 kB/s, finished in 1:53
  DUMP: 52.45% done at 392 kB/s, finished in 1:48
  DUMP: 54.49% done at 391 kB/s, finished in 1:44
  DUMP: 56.46% done at 390 kB/s, finished in 1:40
  DUMP: Broken pipe
  DUMP: The ENTIRE dump is aborted.
|------------------------------------------------
| Backup start: Thu Jan 09 22:30:10 2003
| Backup end:   Fri Jan 10 00:48:24 2003
|------------------------------------------------
| Archiving to file; "mt tell" skipped
|------------------------------------------------
| Rewinding...
| Archiving to file; "mt rewind" skipped
| Compressing log (all.0.20030109.gz)
| Linking all.latest.gz -> all.0.20030109.gz
|------------------------------------------------

File  Contents    (tape index 200301092230.10)
-----------------------------------------------
0  
1   level 0 / Thu Jan 09 22:30:10 2003 from eserverbm1 (root.0.20030109.dump.gz)
-----------------------------------------
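
One possible workaround, only a sketch: flexbackup is not shown doing this itself, and the 2000 MB chunk size is an assumption, with the paths and dump/gzip options simply taken from the log above. The idea is to cut the compressed dump stream into pieces with split, so that no single file on the smbfs-mounted share reaches the 2 GB limit:

  # write the level 0 dump of / to the share as several pieces below 2 GB
  dump -0 -b 10 -a -f - / | gzip -4 | \
      split -b 2000m - /mnt/backup/root.0.20030109.dump.gz.

  # to restore, join the pieces back into one stream
  cat /mnt/backup/root.0.20030109.dump.gz.* | gunzip | restore -rf -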

guestHH

Re: Backup with a larger amount of data
« Reply #1 on: January 10, 2003, 10:29:06 PM »
Hi Nico,

This has been discussed here recently. Take a look at:

http://myezserver.com/downloads/mitel/beta/rsback/
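
For what it is worth, rsback appears to be an rsync-based approach; because rsync copies the file tree file by file instead of writing one large archive, no single file on the share has to grow past 2 GB. A minimal sketch of that idea (the paths here are illustrative only, not the contrib's actual configuration):

  # mirror the e-smith data to the mounted W2K share, file by file
  rsync -a --delete /home/e-smith/ /mnt/backup/e-smith/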

Regards,
guestHH

Derek

Re: Backup with a larger amount of data
« Reply #2 on: January 11, 2003, 12:02:19 AM »