Koozali.org: home of the SME Server

Incremental backup mysql using Affa and bin log

Offline sopper99

Incremental backup mysql using Affa and bin log
« on: November 17, 2010, 01:02:19 PM »
I have been using Affa backup since its inception and I am very pleased with it.

Recently I installed the Zarafa package: http://wiki.contribs.org/Zarafa
No sweat there either.
I moved my entire IMAP mailbox over to Zarafa, which in turn stores it in MySQL.
During the nightly Affa backup job, the entire MySQL database is dumped to /home/e-smith/db/mysql
and transferred to my Affa box.
Although it works, it uses a huge amount of disk space on my Affa box on every single backup run.
The beautifully functioning Affa, with its incremental (hard-link) backups that make such good use of drive space, is letting me down a little here.
Sure, my SQL dump is about 2 GB and can be transferred to my Affa box over an internet link, but it wastes bandwidth and space.
I managed to enable the bin-log in my.cnf. You get smaller log files to transfer, but the mysqldump run by signal-event pre-backup still dumps the entire database.
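For reference, this is roughly what I added to the [mysqld] section of my.cnf (the base name and the expiry are just what I chose, not anything Affa or SME Server requires):
[mysqld]
# enable the binary log; MySQL rotates the files as mysql-bin.000001, .000002, ...
log-bin=mysql-bin
# optionally have old bin-logs removed automatically
expire_logs_days=7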
Is it safe to disable / alter the pre-backup so that it just copies the bin-logs to /home/e-smith/db/mysql and lets Affa copy those instead of the full dump?
Once a week I can make a full backup (dump) with the original pre-backup, or with:
mysqldump --single-transaction --flush-logs --master-data=2 \
         --all-databases > fullbackup.sql
I must read up on the syntax of MySQL, as I am not very familiar with mysql and mysqldump.
If someone has a suggestion to share with me, he or she is welcome!



Offline cactus

Re: Incremental backup mysql using Affa and bin log
« Reply #1 on: November 17, 2010, 06:52:05 PM »
Is it safe to disable / alter the pre-backup so that it just copies the bin-logs to /home/e-smith/db/mysql and lets Affa copy those instead of the full dump?
No, as you would need to know the exact starting point of the first binlog: binlogs are relative to the database state at a certain point in time, and doing a restore from binary logs is rather complex. Apart from that, with this method there may still be data in memory that has not been written to the tables and to the binary log. If that is the case you will have an inconsistent backup, as you will lose data on restore.

Apart from that, all queries run on the server are stored in the binary log, so select operations are stored as well. This is overhead which is not needed for a restore.
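Just to give an idea of what a restore from binary logs involves (the file names below are only an example): you first restore the last full dump, and then replay, in the right order, every binlog that was written after that dump.
# restore the last consistent full dump first
mysql -u root -p < fullbackup.sql
# then replay all binlogs created after that dump, in order
mysqlbinlog /var/lib/mysql/mysql-bin.000007 /var/lib/mysql/mysql-bin.000008 | mysql -u root -p
If you start with the wrong binlog, or skip one, the replayed statements no longer match the state of the restored dump.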

Once a week I can make a full backup (dump) with the original pre-backup, or with:
mysqldump --single-transaction --flush-logs --master-data=2 \
         --all-databases > fullbackup.sql
I must read up on the syntax of MySQL, as I am not very familiar with mysql and mysqldump.
If someone has a suggestion to share with me, he or she is welcome!
Why can you only make a dump once a week? A dump should not take too long and should not be too invasive on your server if you configure your locking properly. Can you tell us why you can only do a dump once a week? Perhaps we can help you improve your backup routines.

I must read up on the syntax of MySQL, as I am not very familiar with mysql and mysqldump.
If someone has a suggestion to share with me, he or she is welcome!
Since you are stating that, I wonder why you chose this rather advanced technique, which IMHO is not for the MySQL illiterate. mysqldump was written for making dumps and backups, so I wonder why you are not using that. Perhaps you can elaborate on that.
Be careful whose advice you buy, but be patient with those who supply it. Advice is a form of nostalgia, dispensing it is a way of fishing the past from the disposal, wiping it off, painting over the ugly parts and recycling it for more than its worth ~ Baz Luhrmann - Everybody's Free (To Wear Sunscreen)

Offline sopper99

Re: Incremental backup mysql using Affa and bin log
« Reply #2 on: November 18, 2010, 09:19:23 AM »
Thanks for the reply.

I did not know that the bin-log also stores all the queries run by the server.
The only reason I came up with this is that I read on the MySQL site that it was the way to make incremental backups.
If mysqldump had a way to specify a starting point from which to make the dump, I would be pleased.
One could make a dump every day, as my Affa backup does now via pre-backup.
Every dump would hold only the changes since the last dump, say dump.001, dump.002, ...
If disaster strikes, running a restore would run pre-restore, and that is where you would issue a command to collect all the dumps into one (merge).
Something like that is not possible at the moment?
That is exactly the way I wanted to go with the bin-log.
Once a week:
Have MySQL merge all the bin-logs and issue a flush-logs so a new log is started. Then let Affa back up the /var/lib/mysql directory without the last bin-log.
Every day:
Issue a flush-logs so a new log is started. Then let Affa back up the /var/lib/mysql directory.
As Affa uses rsync with hard links, only the modified / latest bin-logs are really transferred.
When disaster strikes:
Restore the /var/lib/mysql directory. One could issue a command to merge the bin-logs.
Sure, whatever is in memory at that time is gone, but with the standard Affa run once a day it is the same.
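The daily part would be nothing more than something like this (just a sketch of the idea, not a tested script):
# close the current binary log so MySQL starts a fresh one
mysql -u root -p -e "FLUSH LOGS;"
# the nightly Affa run then rsyncs /var/lib/mysql; thanks to the hard links only
# the bin-logs that changed since the previous run are actually transferred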

Sure, the standard pre-backup with a dump works great and is the easiest to restore.
My method is perhaps making it unwisely complex.

When switching to Zarafa it occurred to me that every day the entire dump would be copied by Affa. That makes sense, of course. It is just not so great that it has to transfer all of the data over again (in my case over my IPsec VPN to my own Affa box at home).
A mailbox for one given user holds perhaps 1 to 2 GB of data; with a few users that is already gigabytes of data to transfer every night. Sure, the attachments are stored as files under Zarafa (the Zarafa contrib is already configured that way), so those are handled by rsync.

If the bin-log way is unsafe and too complex I will abandon my thoughts about it.
I just thought it was a nice way to make use of rsync and avoid having to transfer all the data every day. :-)

Offline cactus

Re: Incremental backup mysql using Affa and bin log
« Reply #3 on: November 18, 2010, 09:35:00 AM »
Then let Affa back up the /var/lib/mysql directory without the last bin-log.
Very bad practice, as this will not make sure that all data is on disk, nor that the tables are in a consistent state. You should at least stop the mysqld service for that.

When disaster strikes:
Restore the /var/lib/mysql directory. One could issue a command to merge the bin-logs.
Did you ever test that? AFAIK there is no way to merge binary log files.

Sure, whatever is in memory at that time is gone, but with the standard Affa run once a day it is the same.
Which proves that your backup strategy is poor: backups are meant to recover all data, they should not lose data. Losing data can cause pretty strange and hard to troubleshoot problems in any application that depends on the content of the database, as the integrity of the data might be lost.

Sure, the standard pre-backup with a dump works great and is the easiest to restore.
My method is perhaps making it unwisely complex.
Then why not make more dumps, say in a daily fashion?
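For example, a simple daily cron job along these lines would do (the path is only a placeholder, and this is not the SME Server pre-backup action itself):
#!/bin/sh
# nightly dump, e.g. run from /etc/cron.daily/ -- illustrative only
mysqldump --all-databases --flush-logs --master-data=2 \
         > /home/e-smith/db/mysql/daily-dump.sql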

When switching to Zarafa it occurred to me that every day the entire dump would be copied by Affa. That makes sense, of course. It is just not so great that it has to transfer all of the data over again (in my case over my IPsec VPN to my own Affa box at home).
You can easily compress the dump files, which will yield a very good compression ratio, as the nature of the content (lots of repetition) makes it very suitable for compression.
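Something as simple as piping the dump through gzip already helps a lot (the file name is just an example):
mysqldump --all-databases | gzip > fullbackup.sql.gz
# and to restore:
# zcat fullbackup.sql.gz | mysql -u root -p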

A mailbox for one given user holds perhaps 1 to 2 GB of data; with a few users that is already gigabytes of data to transfer every night. Sure, the attachments are stored as files under Zarafa (the Zarafa contrib is already configured that way), so those are handled by rsync.
Perhaps you are better off rethinking your strategy and considering replication; that might be more suitable in your case.
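In a nutshell, and leaving out a lot of detail (host name, credentials, log file and position are placeholders): give the master server-id=1 and log-bin in my.cnf, give the slave server-id=2, load an initial dump of the master on the slave, and then point the slave at the master:
# MASTER_LOG_FILE and MASTER_LOG_POS come from SHOW MASTER STATUS on the master
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
         MASTER_USER='repl', MASTER_PASSWORD='secret',
         MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=106;
         START SLAVE;"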

If the bin-log way is unsafe and too complex I will abandon my thoughts about it.
I just thought it was a nice way to make use of rsync and avoid having to transfer all the data every day. :-)
If it suits your needs, keep them, but I think it is a bad decision. Binary logs are there for the sake of replication, not for use as a backup strategy.

Part of a good backup routine is to also run the restore on a regular basis and see if it works flawlessly, especially when the routines are your own. The hard part in your backup strategy is knowing at what point in time to start restoring your binary log data.

Offline sopper99

Re: Incremental backup mysql using Affa and bin log
« Reply #4 on: November 18, 2010, 10:13:04 AM »
If it suits your needs, keep them, but I think it is a bad decision. Binary logs are there for the sake of replication, not for use as a backup strategy.

Part of a good backup routine is to also run the restore on a regular basis and see if it works flawlessly, especially when the routines are your own. The hard part in your backup strategy is knowing at what point in time to start restoring your binary log data.
That is where I don't understand things. I thought that by completely backing up the MySQL data dir together with the binary logs an exact copy is made. Restoring it would bring it back to a previous state, without the last bin-log though. That copy is now also only made once a day (nightly). Replication is indeed a possibility.

Is there no way to just dump the changes made in the MySQL database every day?
So that one does not need to copy the entire database (dump) every time?

I came up with this because I read http://www.sitemasters.be/tutorials/2/1/560/MySQL/Backup_en_recovery_MySQL where they say it is a good strategy.

Every now and then I have to restore a file. That is easy: just WinSCP to my Affa box and copy the file back.
Now with Zarafa it is not that easy anymore, since it uses MySQL; you cannot simply copy a file (mail) back.

As I have been using SME Server for a while and have read many posts from you with good advice, I will take your advice and abandon the bin-log way!


Thanks for the advice.


Offline cactus

Re: Incremental backup mysql using Affa and bin log
« Reply #5 on: November 18, 2010, 11:19:05 AM »
That is where I don't understand things. I thought that by completely backing up the MySQL data dir together with the binary logs an exact copy is made. Restoring it would bring it back to a previous state, without the last bin-log though.
Only when you make sure that all data is flushed to disk before creating a new binlog.

Is there no way to just dump the changes made in the MySQL database every day?
No, not without modifying the schema of every table by adding a timestamp for when the record was last modified, which is also a hassle.
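Just to illustrate what that would mean for every single table (the database, table and column names are made up):
mysql -u root -p -e "ALTER TABLE some_db.some_table
         ADD COLUMN last_modified TIMESTAMP
         DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;"
Every 'incremental dump' would then have to select, per table, only the rows newer than the previous run.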

So that one does not need to copy the entire database (dump) every time?
Yes, that is how it works.

I came up with this because I read http://www.sitemasters.be/tutorials/2/1/560/MySQL/Backup_en_recovery_MySQL where they say it is a good strategy.
It does, but it forgets to tell you what to take care of when doing so and what the possible risks are. It also assumes you are using InnoDB, which is disabled by default on SME Server.
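With the default MyISAM tables you would rely on table locking rather than on --single-transaction, for example (a sketch, not the command SME Server's pre-backup action actually runs):
# --single-transaction only gives a consistent dump for InnoDB tables;
# for MyISAM, lock all tables for the duration of the dump instead
mysqldump --all-databases --lock-all-tables --flush-logs --master-data=2 > fullbackup.sql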

As I have been using SME Server for a while and have read many posts from you with good advice, I will take your advice and abandon the bin-log way!
I am not saying you should do so, but make sure you capture the state of the database in a way that you have all data available.
If you are willing to keep making incremental backups it can be done (a sketch follows below), but be sure to:
  • flush all data to disk
  • stop mysqld
  • copy all the files
  • start mysqld
  • make a dump like you used to do and flush your binlog

By flushing all data to disk and stopping the MySQL daemon you make sure all tables are closed and are consistent when copying away the files. This will prevent data loss.
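Roughly, such a routine could look like this (only a sketch: the exact command to stop and start mysqld, the credentials and the destination path will differ on your setup):
#!/bin/sh
# close and flush the open tables
mysql -u root -p -e "FLUSH TABLES;"
# stop mysqld so nothing writes to the data files while they are copied
/etc/init.d/mysqld stop    # use whatever stops mysqld on your system
# copy all the files to a location Affa backs up
cp -a /var/lib/mysql /some/backup/location/
# start mysqld again
/etc/init.d/mysqld start
# make a dump like you used to do and start a fresh binlog
mysqldump --all-databases --flush-logs --master-data=2 > fullbackup.sql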
Thanks for the advice.
You're welcome.

Offline janet

Re: Incremental backup mysql using Affa and bin log
« Reply #6 on: November 18, 2010, 11:19:24 AM »
sopper99

Perhaps you should/could abandon Zarafa too, as the way it stores data in MySQL databases is the real source of your problem.
Please search before asking, an answer may already exist.
The Search & other links to useful information are at top of Forum.

Offline sopper99

Re: Incremental backup mysql using Affa and bin log
« Reply #7 on: November 18, 2010, 02:09:29 PM »
Mary, you are right; I will consider abandoning Zarafa, at least for the mail.
I will continue to use the SME Server IMAP server, and perhaps keep just the Zarafa shared calendar part.
If I use Zarafa in the future I will set up a replication slave for MySQL, which is easier to set up than what I wanted to do.
Thank you.

Offline CharlieBrady

Re: Incremental backup mysql using Affa and bin log
« Reply #8 on: November 19, 2010, 08:33:46 PM »
I have been using Affa backup since its inception and I am very pleased with it.

Recently I installed the Zarafa package: http://wiki.contribs.org/Zarafa

You are already off-topic for this forum, which concerns only the software installed from the SME server CDROM. Please take the time to post in the correct forum. Thanks.

Offline cactus

Re: Incremental backup mysql using Affa and bin log
« Reply #9 on: November 20, 2010, 10:45:49 AM »
Moving to SME 7.x Contribs where it is more appropriate.