I guess you could take the view that if the data isn't important, you can just overwrite everything.
But if it is important, you should have a backup anyway, so a second drive makes sense for that reason alone. And it's less risky if there is a problem (and you end up needing a complete re-install anyway).
I have 0.6 TB of data, and I just checked Amazon: the first 1 TB USB drive listed is only $40, and the fourth one down is only $20.
Preface: I am not an expert on ext4 at all. But in general, shrinking a file system is a bit riskier than expanding one, though in practice not much riskier than defragmenting. Still, as mentioned, if your data is important, it should be backed up. If it's not, then the next time you install Linux, consider having a separate partition for your user files that will remain even if you wipe the OS partition for a reinstall.
No, but recovering that partition if it gets destroyed would need a backup/restore application that is self-bootable. Hence the Google search I posted; you will need to check the various products to see whether this feature is available.
I do not need to back up my data, but I like to have a backup of my OS.
The first question you have to ask yourself is: why?
If you're keen to be able to get your system going again after a crash, it can be done. But literally everything you do outside userland will need a fresh backup every time you so much as sneeze. If it's a new system, the backup is useless (for the most part). If it's the same system, this is exactly what disk mirroring with RAID was created for. No reason to reinvent the wheel.
That being said, tar was made for this sort of thing. Just don't go backing up /dev and /proc. You can back up /tmp, but it's pointless.
But you'd be much better off just creating a post-install shell script to recover from a crash, along with maybe an /etc tarball.
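A toy sketch of the tar approach, using a throwaway directory tree as a stand-in for the real filesystem (directory names here are made up for the demo):

```shell
# Demonstrates tar's --exclude, the same mechanism you'd use to skip /dev and /proc.
# Build a toy tree standing in for the real filesystem:
mkdir -p sys-demo/etc sys-demo/proc sys-demo/dev
echo "127.0.0.1 localhost" > sys-demo/etc/hosts
echo "runtime junk"        > sys-demo/proc/cpuinfo

# Archive it, leaving the pseudo-filesystems out:
tar czf sys-demo.tar.gz --exclude=sys-demo/proc --exclude=sys-demo/dev sys-demo

# Listing the archive shows sys-demo/etc/hosts made it in,
# and nothing from proc or dev did:
tar tzf sys-demo.tar.gz
```

On a real system the equivalent would be something like `tar czf /mnt/usb/root.tgz --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/tmp /` (the mount point is an assumption), ideally run with the system quiesced or from live media.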
Installed a second NAS-class disk in my home server, replacing a motley assortment of consumer-grade drives, one of which died recently. RAID resync is almost complete.
Another useful tip:
I set up a cron job to do a "dpkg -l" and a "snap list" to a user-space file, which is then included in my daily user backups.
So if I need to rebuild or replace a machine, I don't have to remember all the packages I've downloaded over time.
A quick file comparison tells me what I missed. (Meld is my weapon of choice, btw.)
As well as user space on all machines, I also back up /etc and /var from the server.
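A sketch of how those saved lists pay off at rebuild time (the file names below are made up for the example; the real list would come from the dpkg -l cron job):

```shell
# On the old machine, a daily cron job would save something like:
#   dpkg -l | awk '/^ii/ {print $2}' > ~/backups/packages.txt
#   snap list > ~/backups/snaps.txt
# At rebuild time, compare the saved list against the fresh install.
# Both files must be sorted for comm (dpkg -l output already is).
printf 'curl\ngit\nvim\n' > old-packages.txt   # stand-in for the saved list
printf 'git\n'            > new-packages.txt   # stand-in for the new machine
comm -23 old-packages.txt new-packages.txt     # lines only in the old list: curl, vim
```

Meld does the same comparison interactively, as mentioned; comm is handy when you just want the missing names to feed to apt.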
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
When the user types in id; date, that becomes the value of the $cmd variable. So when you execute $cmd, it's as if you had typed the contents of the variable at the command line. Bash performs word splitting on the expansion but does not re-parse it, so the ; is not treated as a command separator: the interpreter goes looking for a command named id;. It's not treating the string as a script to be parsed and executed. For that you'll need to use eval.
[k5054@localhost ~]$ cmd="id; date"
[k5054@localhost ~]$ $cmd
-bash: id;: command not found
[k5054@localhost ~]$ eval $cmd
uid=1002(k5054) gid=1002(k5054) groups=1002(k5054)
Mon 17 Oct 2022 02:37:19 PM UTC
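You can see the same behaviour with a command that exists either way (echo here instead of id, so the un-eval'd form runs but does the wrong thing):

```shell
cmd="echo hello; echo world"
$cmd          # word splitting only: echo gets the arguments "hello;" "echo" "world",
              # so this prints the single line: hello; echo world
eval "$cmd"   # re-parses the string, so the ; separates two commands:
              # prints hello and world on separate lines
```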
One of the screens runs a PHP script that connects to the IP address of another PC on the same network as the PC running the web server.
However, the PHP script fails because it cannot reach the other PC.
Both PCs have Debian 11 installed.
Since pinging one PC from the other requires sudo on both machines, I thought the problem might be that the web user (www-data) needs to be added to the netdev group, so I ran this command:
$ sudo adduser www-data netdev
Adding user `www-data' to group `netdev' ...
Adding user www-data to group netdev
But the PHP script is still unable to reach the other PC.
Check the firewall settings on the destination PC. You may be blocking the port on the destination for incoming connections. There's an article that may help here: IBM Documentation
Member 15796760 wrote:
Since pinging one PC from the other requires sudo
I'm not sure exactly what that means. You should be able to ping a (reachable) host without needing sudo. For example, assuming that your resolver and gateways are correctly configured, you should be able to ping the other PC by IP address as a regular user.
Since you're using HTTP, that would be port 80. If you've installed a web server on PC2, then port 80 may already be open. You can check that the port is open using nc:
nc -z -w 1 IpPC2 80; echo $?
$? is the return value of the last command, so if it's 0 the connection was successful; anything other than 0 means the connection failed. Also make sure you've got a web server running on PC2. You can check that using netstat.
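If nc isn't installed, bash can do a rough reachability check on its own via its built-in /dev/tcp path (a bash feature, not a real device file; IpPC2 is a placeholder for the second machine's address, as above):

```shell
# Prints "open" if something accepts a TCP connection on host:port, else "closed".
check_port() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo "open" || echo "closed"
}
check_port 127.0.0.1 80   # on the real setup: check_port IpPC2 80

# On PC2 itself, list the listening sockets (ss is the modern netstat):
#   ss -tlnp | grep ':80'
```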