I run various NFS network shares and mount them read-only or read/write on clients running Ubuntu.

On a vanilla Ubuntu image the first step is to install the NFS client and tools to be able to mount a share.

sudo apt-get install rpcbind nfs-common

From there, mounting a share:

sudo mount NFS-Server-IP:/path/to/share /folder/on/client

e.g.

sudo mount 192.168.1.2:/volumes/shared_files /home/mounted_directory/

Note: you must create an empty directory on the client to mount the NFS share to. E.g. if using /home/backups, run mkdir /home/backups before mounting the share.
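To mount the share read-only rather than read/write, pass the -o option to mount. A minimal sketch, reusing the example server and paths from above:

sudo mount -o ro 192.168.1.2:/volumes/shared_files /home/mounted_directory/

Swap ro for rw (the default) to allow writes, and unmount with sudo umount /home/mounted_directory/ when finished.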

I recently wanted to set up all my VMs to report to a remote syslog server, so that I can monitor their output and the different levels of information.

The following help guide was just what I needed: Papertrail support article

Step 1:
Find out which syslog daemon is installed:

ls -d /etc/*syslog*

Step 2:
Add the following to the end of the respective .conf file:

*.* @your.server.ip

Step 3:
Reload or restart the logging daemon so it picks up the change.
Ubuntu: sudo killall -HUP rsyslogd or sudo service rsyslog restart
Mac OS X: sudo killall -HUP syslog syslogd

Then, to test that messages are getting through to the remote syslog server:

logger "this is a test"

Worked a dream.

If your syslog server listens on a specific port, append it to the address:

*.* @your.server.ip:port_number
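With rsyslog, a single @ forwards over UDP and a double @@ forwards over TCP. As a rough sketch of what the end of /etc/rsyslog.conf might look like (the IP and port below are just placeholders):

*.* @192.168.1.5:514
# or, to forward over TCP instead:
# *.* @@192.168.1.5:514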

This caught me out. I had edited /etc/network/interfaces to set my IP addresses to static and restarted networking. However, after about 24 hours the IP would revert to a DHCP-assigned address, despite the interfaces file still containing the static settings.

I got round this by removing the DHCP client.

sudo apt-get remove isc-dhcp-client

I often want to run commands or files, e.g. Python scripts, and keep them running after my terminal session is closed.

There’s a handy tool to do this called screen.

It’s quick to install.

sudo apt-get install screen

Then, prefixing your command line entry with ‘screen’ will run it inside a screen session. There are a few key bindings you need to know about:

Ctrl a c – Creates a new window within the screen session, so that you can run more than one thing at once.
Ctrl a n – Switches to the next window (if you have more than one).
Ctrl a p – Switches to the previous window (if you have more than one).
Ctrl a d – Detaches the screen session (without killing the processes running in it)

For example:

screen path/to/pythonscript.py
Ctrl a d

The process carries on running in the background. You can check it is still there by listing your screen sessions:

screen -ls
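To get back into a detached session, screen -r reattaches it. A quick sketch, with 12345 standing in for whatever session ID screen -ls reports:

screen -ls
# e.g. 12345.pts-0.hostname (Detached)
screen -r 12345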

After the initial DHCP configuration is received, I set my virtual machines to a static address.

To do this:

sudo nano /etc/network/interfaces

and change the following from:

auto eth0
iface eth0 inet dhcp

to (in this example, IP 192.168.1.10)

auto eth0
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1

Then issue the commands:

sudo ifdown eth0
sudo ifup eth0

Running ‘ifconfig’ should then display the correct results for eth0.

I was running into a frustrating permissions issue when trying to set up FTP access for various users. They couldn’t follow a symlink to the /var/www/sitefolder for their respective vhost.

I worked around this by making the www sitefolder the home directory for their user account. As each user exists purely for the website, this was an effective workaround.

useradd -d /var/www/sitefolder username

Now when logging in via FTP I am greeted with the respective www files and can upload and download accordingly.
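If the user account already exists, usermod can make the same change. A minimal sketch, assuming the account is called username and the docroot is /var/www/sitefolder as above:

sudo usermod -d /var/www/sitefolder username

The user may also need write permission on that directory before uploads will work.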

I’m finally getting round to monitoring what’s going on with various network devices. I have been meaning to setup an MRTG / SNMP setup for a while now. It’s actually very easy with the following steps from the Ubuntu wiki:

sudo apt-get install snmpd
sudo apt-get install mrtg

Now that mrtg is installed, we must create a home where web pages related to the program can reside. The Ubuntu / apache2 default is /var/www/.

sudo mkdir /var/www/mrtg

Backup the original /etc/mrtg.cfg file:

sudo cp /etc/mrtg.cfg /etc/mrtg.cfg.ORIGINAL

Create a configuration file for MRTG:

cfgmaker snmp_community_string@ip_address_of_device_to_be_monitored > /etc/mrtg.cfg

Create an index file for the webserver:

indexmaker /etc/mrtg.cfg > /var/www/mrtg/index.html

Wait about 5 minutes before browsing to:

http://server_ip/mrtg/index.html

I also repeated the above steps for each of my SNMP enabled devices, writing to a new config file each time.

I then made a new entry in /etc/cron.d/ for each new config file, so that the mrtg process runs every 5 minutes for each device – roughly as in the sketch below.
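As a sketch of what one of those entries could look like (the filename /etc/cron.d/mrtg-switch and the config path /etc/mrtg-switch.cfg are just illustrative names for one of the per-device configs):

# /etc/cron.d/mrtg-switch -- run mrtg for this device every 5 minutes
*/5 * * * * root env LANG=C /usr/bin/mrtg /etc/mrtg-switch.cfg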

The stats can be seen here.

I often need to package and compress a folder from the command line, for example before running updates, so that I have a fall-back position.

The following works for me:

tar -czf filename.tar.gz foldername

So, when I had a folder named www visible from the command prompt and wanted to call the archive www-040713.tar.gz, it was:

tar -czf www-040713.tar.gz www

c = create an archive
z = compress with gzip
f = write to the named file
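Going the other way, the same flags with x instead of c unpack the archive, and t lists its contents without extracting:

tar -tzf www-040713.tar.gz
tar -xzf www-040713.tar.gz

The extracted files land in the current directory, recreating the www folder.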

I was going round in circles trying to get WordPress permalinks to work on an Ubuntu server with Apache2 installed.

The .htaccess file was fine:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

# END WordPress

However, I still got a 404 Page Not Found error when I went to siteurl.com/page-name rather than siteurl.com/?p=123.

In the end, it turned out I needed to make a tweak to the Apache virtualhost.conf file for the site in question:

By default, the <Directory> section had

AllowOverride None

This needed to be changed to

AllowOverride All
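For reference, the relevant part of the vhost file ends up looking roughly like this (the /var/www/sitefolder path is just an example docroot, not necessarily yours):

<Directory /var/www/sitefolder>
    Options Indexes FollowSymLinks
    AllowOverride All
</Directory>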

A quick

sudo service apache2 reload

And, hey presto, permalinks were working again.

I wanted to check that suPHP was working correctly and executing scripts under a specific user id.

This little PHP script did the trick.

<?php echo 'whoami = ' . exec('/usr/bin/whoami'); ?>

That echoed (output) the result of running /usr/bin/whoami.
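A quick way to exercise it, assuming the snippet is saved as whoami.php in the site’s document root (the filename and URL are placeholders):

curl http://siteurl.com/whoami.php

With suPHP working, this should print the user configured for the vhost rather than the general Apache user (typically www-data on Ubuntu).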