Increasing the size of root (/) partition - please help

A

Adam

Host OS: Ubuntu 10.04 LTS
Guest OS: Windows XP Pro SP3



I am interested in increasing the size of the root (/) partition.
Here's a screenshot of the disk (via GParted) ...
http://imageshack.us/photo/my-images...agpartedb.png/

And, here's additional disk usage info ...

adam@ASUS-LAPTOP:~$ df -H
Filesystem   Size  Used  Avail Use% Mounted on
/dev/sda1    9.9G  7.6G  1.9G   81% /
none         4.2G  336k  4.2G    1% /dev
none         4.2G  2.3M  4.2G    1% /dev/shm
none         4.2G  287k  4.2G    1% /var/run
none         4.2G     0  4.2G    0% /var/lock
none         4.2G     0  4.2G    0% /lib/init/rw
/dev/sda6    475G   69G  383G   16% /home
adam@ASUS-LAPTOP:~$
adam@ASUS-LAPTOP:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda1 during installation
UUID=6f40ecfb-c0eb-4c2d-a9a6-6a6672db35a2 / ext3 errors=remount-ro 0 1
# /home was on /dev/sda6 during installation
UUID=2a875a2a-8088-4844-be48-337a2ee3200d /home ext3 defaults 0 2
# swap was on /dev/sda5 during installation
UUID=73889cf0-db54-453c-92cf-384014a018cb none swap sw 0 0
adam@ASUS-LAPTOP:~$

What's the best way to go about this task?
Will I risk losing data if I just decrease the size of the /home partition from
the right end?
If so, what's the fastest way to copy all data from /home?
Copy to a USB external drive? Or, copy to another drive on the network?

Constructive posts will be much appreciated.
 
A

Adam

David Brown said:
You /always/ risk losing data when you mess with your partitions. Mind
you, if you don't have a decent raid system you also risk losing data just
by having it stored on a hard disk. So make sure you have a good, working
backup of all data of interest before you start playing - and make sure
that the backup is readable from other systems. A good choice would be a
full copy of /home (and maybe also /) to an external hard disk.

Once that's done, if gparted says it can shrink /home to move its start,
then it will almost certainly do so without data loss. But it will
probably take a /long/ time. Growing / to use the new space will be much
faster.

Use a boot cd or USB stick for this sort of thing, rather than trying to
do it live on the running system. My preferred tool is
<http://www.sysresccd.org/>.

Did I mention you should make backups first? Even if the software is
perfect, you might get a power-cut at an awkward moment.

Thanks, will definitely back up first ...
http://www.psychocats.net/ubuntu/backup
Can't do RAID on a laptop, so I'll have to
hold off on RAID until my desktop build is done.
Slowly but surely.

Guru Fred (over at ubuntuforums.org) suggested to ...
"Another choice is just to shrink /home by 25GB, and create a new partition
for a new /.
Then you can install a clean new system with out a lot of moving of data.
You could still boot old root. You can export list of installed applications
if a lot and
reinstall so you have everything the way it was."
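
(For reference, one common way to do the export/reinstall step Fred mentions is with dpkg selections - a minimal sketch, assuming a dpkg-based Ubuntu install; file names are just examples:)

dpkg --get-selections > ~/package-selections.txt          # on the old root
sudo dpkg --set-selections < ~/package-selections.txt     # on the new root
sudo apt-get dselect-upgrade                               # install everything marked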

With the upcoming LTS release of Ubuntu,
I really like Guru Fred's 2nd root (/) partition approach.

Thanks (to all) for your help.
 
F

Frank Williams


David Brown said:
You /always/ risk losing data when you mess with your partitions. So make
sure you have a good, working backup of all data of interest before you
start playing - and make sure that the backup is readable from other systems.


Total crap - use this program:

http://www.partition-tool.com/personal.htm

I have never backed up a thing, and it has worked many times.
 
M

Mike Tomlinson

Adam <adam@no_thanks.com> said:
What's the best way to go about this task?

Burn the GParted Live CD and boot from it.
Will I risk losing data if I just decrease the size of the /home partition from
the right end?

Yes. Any messing about with partitions has the potential to be data
destructive. The risk isn't high, but it does exist.

If you care for your data, back it up before starting.
If so, what's the fastest way to copy all data from /home?
Copy to a USB external drive? Or, copy to another drive on the network?

USB external.

In your situation, I'd copy the contents of /home off onto an external
USB drive (/home is only 70GB used), boot the GParted live CD, delete
the /home partition, resize /, then re-create /home and copy the data
back. This will have the effect of defragmenting /home as well.

Use 'cp -Rvp' or 'cp -a' to preserve permissions and file attributes.

This will be quicker than getting GParted to resize /home as it will be
a lengthy operation to shuffle all the data down.
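
A minimal sketch of that copy step (assuming the external drive is ext-formatted and mounted at /media/usbdisk; paths are illustrative):

sudo mkdir -p /media/usbdisk/home-backup
sudo cp -a /home/. /media/usbdisk/home-backup/    # -a preserves permissions, ownership, timestamps, symlinks
# ... boot the GParted live CD, delete /home, grow /, re-create /home ...
sudo cp -a /media/usbdisk/home-backup/. /home/    # copy the data back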
 
M

Mike Tomlinson

David Brown said:
One little hint, in case it's not obvious (or in case other people are
reading this thread) - if you are copying data to an external USB disk
like this, make sure the disk is formatted as ext4 (or some other Linux
format).

A good tip, one I should have mentioned. Thanks.
 
A

Arno

In comp.sys.ibm.pc.hardware.storage Mike Tomlinson said:
A good tip, one I should have mentioned. Thanks.

And for Linux, you do not want to copy files; you want to create
tar archives and store them externally. You still want to reformat the
external drive to ext2/3/4, as the archives can easily be larger than 4GB
(the FAT32 file-size limit).

I would run the whole thing with gparted overnight though and
only use the external drive for backup. Defragmenting is not
necessary on ext2/3/4, and you will likely see no benefit
from copying the data back.
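
(In practice that could look something like this - a sketch, assuming the external drive is mounted at /media/backup; names are illustrative:)

sudo tar cpf /media/backup/home.tar -C / home    # archive /home, preserving permissions
sudo tar df /media/backup/home.tar -C /          # compare the archive against the original files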

Arno
 
A

Arno

No, you want to copy the files.

I disagree.
Making a tar archive is useful for two purposes - one is to hold a bunch
of files together in a single unit, and the other is to make it easy to
compress the files. Neither of these applies here.

The third is that you can easily compare with the original (mandatory
in any reliable backup) and get checksums. Today, data amounts are
large enough that bit errors are a concern. Individual file copies
give you neither.
When copying the files once, it would not make a noticeable difference
in time or space. But when copying a second time, to update the backup,
rsync is vastly more efficient than making a new tarball. And for
recovering files, a full copy is a far better choice than a tarball -
you can quickly and easily see exactly what you want, rather than having
to unpack everything (or figuring out the tar syntax to view listings of
files, then extract only some parts).

Are we talking about a temporary single backup copy or about a
general archiving strategy? I thought it was the former.

As to the backup, rdiff-backup is far superior here as it
gives you checksums and earlier generations.
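(For anyone unfamiliar with it, rdiff-backup keeps a current mirror plus reverse increments - a rough sketch, assuming a backup disk mounted at /mnt/backup; paths and time spec are illustrative:)

rdiff-backup /home /mnt/backup/home                        # mirror plus an increment per run
rdiff-backup -r 3D /mnt/backup/home /tmp/home-3-days-ago   # restore the state from 3 days back
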
I too would prefer to let gparted re-arrange the partition with the data
intact - that way there are two copies of the data. If I were going to
wipe the original partition, I'd make a second copy first.

That is what I was talking about.

Arno
 
A

Adam

David Brown said:
rsync is a very useful tool for backup, particularly because the result is
a direct copy of the files, and thus restore and recovery are simple.


This is Linux - /of course/ you can do raid on your laptop. We are not
limited to raid requiring identical expensive disks connected to special
hardware cards! You can "mirror" your laptop drive with a networked drive
such as an iSCSI export from a server (and the iSCSI export can be a file
or LVM partition instead of a real drive). You don't even have to have it
attached all the time - the mirror can re-synchronise when you connect the
laptop to the server.

Realistically, of course, you just want good backups for your laptop.

If you are going for raid on a desktop, I'd recommend two hard drives and
using the raid10,far layout. It gives you the safety of mirroring, and is
faster than raid0 for most uses (it's a little slower for writes, but
these are often cached - it's the reads that programs have to wait for).
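
(Creating such an array is roughly a one-liner - a sketch, assuming two spare disks /dev/sdb and /dev/sdc dedicated to it; device names are illustrative:)

sudo mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0    # then mount it, or add it to /etc/fstab as usual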

Thanks (Guru DavidB), didn't know that I can do RAID for a laptop.
Will definitely want to "mirror" laptop drive on a regular (monthly/weekly)
basis.
This type of backup will make it easy to replace the laptop drive if it
fails.
Working on figuring out all the details ...

[Tough for me to trim gifts so I'll leave the trimming to others.]
 
A

Adam

Arno said:
As to the backup, rdiff-backup is far superior here as it
gives you checksums and earlier generations.

Thanks (Guru Arno & to all), I do see good usage for "tar" in my situation
(like the ~60 GB .vdi file, which will benefit greatly from compression).
Will definitely want to "tar" huge .vdi files and /etc on a more regular
(weekly/daily) basis.
So, I am planning to use a combo of "mirror" and "tar".
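
(For the /etc part, a minimal sketch, assuming a backup disk mounted at /media/backup:)

sudo tar czpf /media/backup/etc-$(date +%F).tar.gz -C / etc    # dated, compressed copy of /etc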
 
A

Adam

David Brown said:
No, use rsync to do the copies. Your vdi files are big - but only parts
of them change. When you make new backups, rsync can copy over only the
parts that change.

Tar is only useful if you need to package your backups into a single file.
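
(A sketch of what that could look like, assuming the VirtualBox images live under "$HOME/VirtualBox VMs" and the backup disk is mounted at /media/backup; paths are illustrative:)

rsync -av --progress "$HOME/VirtualBox VMs/" /media/backup/vbox/    # re-runs skip files that have not changed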

Okay, will definitely keep this in mind. Thanks!
 
C

ceed

No, use rsync to do the copies. Your vdi files are big - but only
parts of them change. When you make new backups, rsync can copy over
only the parts that change.
Tar is only useful if you need to package your backups into a single
file.

I'm no specialist in this area by any means, but I've learned how to use
rsync - an incredibly useful, powerful little program that can take care of
most backup needs. I also wanted to mention rsync-backup; using this
little perl script has been of great help to me:

<http://devpit.org/rsync-backup/>
 
A

Arno

That's your right - but please be careful about giving advice when you
haven't researched things fully.

I have. What makes you think I may not have?
I have been using tar regularly for about 20 years and rsync
for about 10.
It is certainly true that you may want to double-check your copies
(though in reality, bit-errors are seldom a realistic concern - human
error typically far outweighs the risks of data loss through bit errors).

Again, I disagree. True, in a healthy system bit errors are
exceedingly rare, but I have had them several times with
defective hardware that did not cause any other obvious problems.
One was a slowly dying chipset (inadequate cooling solution
misdesigned by Asus). One was a weak bit in a RAM module. One
was wrong RAM parameters set by the BIOS (again Asus). There are
more.
So do your rsync once, then you can run it again with the "--checksum"
option. This will do a full check, and re-transfer files that do not
match.

But will it give you a warning, or just quietly fix (or break) the
backup file? From what I see, it is the latter, and this option is
not really suitable for a verify. It may or may not be possible to
misuse it as such, though.
If you are copying over a network, it has the advantage that
only checksums are transferred (unless the files do not match, of course).
Or you can compare your source and copy directory trees with "diff -r
-q", which is not really much harder than comparing two tarballs with
"diff -q".

Ahem, you compare the tarball against the filesystem with "tar d(v)f <tarfile>".
Comparing two tarballs is not something you need to do during backup. For a
manual compare of file trees, you can also create an MD5 sums file with
find . -type f -exec md5sum \{\} \; > md5
and run a simple
md5sum -c md5
on the second tree.
The big difference, of course, is that if there is a
mismatch, you only need to re-copy the bad files.

Which is rare enough not to be an efficiency concern at all.
If your tarballs
don't match, you only know that there is a mistake somewhere and you
have to re-copy everything.

No, actually you know that you have a hardware issue. That you
should find and fix next.

And tar will certainly display which files failed the compare.
You can then extract them from the archive to get a closer look
at the error, e.g. with
cmp -b -l <original file> <file extracted from the archive>
Or you can generate and check md5 sums recursively, using a find command:
find . -type f -print0 | xargs -0 md5sum > checksums.md5
md5sum -c checksums.md5
So if you are paranoid about bit errors, there are three ways to check
them that are better than comparing tarballs.

There is no need to compare tarballs, as explained above. And
doing a full verify as part of a backup is not paranoid,
just professional. There are numerous instances where people
without that full comparison against the original files needed
their backups and found them to be bad. And there are quite
real probabilities of hardware issues with RAM, busses,
controllers, etc. that cause bit errors.
Rsync is better for both - though it is /far/ better if you are doing
multiple backups.

And again, for the temporary copy I disagree.
I don't think you really understand rdiff-backup if you think it is
better in general.

I am quite sure I understand it. And I said it was better for
backups. I have been using it for automated backups for some
years now.
There are situations where it is useful - if you
have lots of big files, which only change a little, and you don't have
much space on your target drive, and you don't care about compatibility
between versions of the client and server, and don't care about slow and
awkward restores, and don't want to do any independent compares or
checksums, then rdiff-backup might be worth considering.

The only compatibility issue I ever found with rdiff-backup was
due to a Debian maintainer adding a version to the distro that
was explicitly marked as incompatible.
rsync can do checksumming itself, and is very efficient at working with
multiple generations of backups - only copying things that have changed,
and keeping multiple generation snapshots using hardlinks to minimise
space.
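(For illustration, the hardlink-snapshot pattern being described usually looks something like this sketch, assuming a backup disk mounted at /mnt/backup; dates and paths are illustrative:)

rsync -a --delete --link-dest=/mnt/backup/snap-2012-04-17 /home/ /mnt/backup/snap-2012-04-18/
# files unchanged since the previous snapshot are hardlinked rather than copied again
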

But what does it do for changed files? Hardlinks do not work
there. Forward-diffs? Or full new instances?
If you don't fancy learning the rsync command line and writing
scripts to use it, there are plenty of ready-made solutions that build
on rsync (rsnapshot and dirvish are two good choices).

I have looked at rsync. I think the one thing it does well is
actual synchronization of directory trees. I am using it for that
with some really large trees. For incremental backups,
it is not the right tool IMO. You may differ, of course.

Incidentally, on the file level, tar also supports incremental
backups, although the mechanism is obscure and I do not
recommend it.
At least we agree on something - the importance of multiple copies.

Indeed. Redundancy is the only thing enabling data to reliably
survive.

Arno
 
A

Adam

Arno said:
Indeed. Redundancy is the only thing enabling data to reliably survive.

Okay, will definitely keep this in mind also. Thanks!
 
C

Christoph Schmees

On 18.04.2012 23:43, Arno wrote:
...

Again, I disagree. True, in a healthy system bit errors are
exceedingly rare, but I have had them several times with
defective hardware that did not cause any other obvious problems.
One was a slowly dying chipset (inadequate cooling solution
misdesigned by Asus). One was a weak bit in a RAM module. One
was wrong RAM parameters set by the BIOS (again Asus). There are
more.
...

+1
I just recently had two systems here with HW problems, and both
had corrupt data. And I have seen a weak NAS which occasionally
flips bits during transport. Windows clients just store and
forget, and thus sometimes deliver false data, whereas Linux stalls
when accessing the NAS while it is in the mood of malfunctioning.
Apparently Linux does some sort of check on the smb://
communication and data transfer.

Christoph
 
