Now that we have all of the basic concepts down, let's take a look at some of the most common tasks in UNIX.
The most common task you will perform under UNIX will almost certainly be text editing. When you need a file that contains commands for a program, data, or practically anything else, you'll need to know how to edit a text file.
A text file is just a plain file that contains text. It does not typically contain any formatting commands, other than tabs and newlines. A text file can be easily viewed using more or cat.
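For example, you can create a small text file and view it with cat (the filename notes.txt here is just an illustration):

```shell
# Create a two-line text file, then display it.
printf 'This is a text file.\nIt contains only plain text.\n' > notes.txt

# cat prints the whole file at once; more would page through it
# one screenful at a time.
cat notes.txt

# Clean up the example file.
rm notes.txt
```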
There are many different text editors available. Most UNIX users develop a rabid attachment to their favorite text editor. My personal favorite is pico, a very easy-to-use text editor. vi and emacs are other very common editors.
To run pico, type its name at the command prompt, followed by the name of the file you would like to edit:

steel /N/fs1/clwolfe/Steel $ pico myfile.txt

pico fills the screen with the contents of the file, and lists its most important commands along the bottom of the screen.
pico is based on the pine email reader, which is used on the Shakespeare systems. If you can use pine, pico should give you no trouble.
Electronic Mail originated with UNIX, which may explain a lot about how easy it is to use.
As a user of the Steel cluster, you may receive email there. You can check your email on steel by using pine, a mail program. pine is the same program used on the Shakespeare email servers.
Moving Files between Machines
Occasionally, you will need to move files to or from the UNIX machine.
If your files are currently on a Windows or Macintosh machine, you should use Hummingbird (Windows) or Fetch (Macintosh). Follow these instructions to move files to or from a UNIX machine:
If both machines are UITS UNIX machines (such as the SP, or Steel), your home directories are served by the same NFS server, so you can simply copy the files:

steel /N/fs1/clwolfe/Steel $ cp mystuff.txt /N/fs1/clwolfe/SP/

This would copy the file mystuff.txt from my Steel account to my SP account.
If you want to move files between any two UNIX machines, you can use the ftp UNIX command. It performs the same job as Fetch or Hummingbird, except that it is command-line based.
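A session might look something like this (the hostname and filenames here are made up for the example; get downloads a file, put uploads one):

```
steel /N/fs1/clwolfe/Steel $ ftp other.machine.edu
Name: clwolfe
Password:
ftp> get mystuff.txt
ftp> put results.txt
ftp> quit
```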
UNIX users are encouraged to write their own programs. The simplest kind of program is called a shell script. A shell script is simply a text file containing one UNIX command per line.
The first thing to do is to create a place for all of your scripts:

steel /N/fs1/clwolfe/Steel $ mkdir bin
steel /N/fs1/clwolfe/Steel $ cd bin
The standard location for users' scripts is the bin directory in your home directory. This pathname should already be in your $PATH environment variable by default. This means that no matter where you are, you will be able to execute your own scripts as if they were UNIX commands.
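You can check that bin is on your search path by printing the $PATH variable; the exact list of directories will vary from system to system:

```shell
# Print the colon-separated list of directories the shell
# searches when you type a command name.
echo $PATH
```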
The next step is to create a text file with the commands you would like to execute. Use any text editor (e.g., pico) to do this.
For example, you could write a script that prints out the words "Hello World!", by creating a text file with the following two lines:
#!/bin/sh
echo "Hello World!"
Save this file in bin with the filename helloworld.
Next, make the file executable by changing its permissions:

steel /N/fs1/clwolfe/Steel/bin $ chmod u+x helloworld
steel /N/fs1/clwolfe/Steel/bin $ ls -l helloworld
-rwx------ 1 clwolfe iustaff 32 Feb 23 10:47 helloworld

As you can see, the file is now executable. Since bin is in your $PATH, you can execute your script simply by typing its name:

steel /N/fs1/clwolfe/Steel/bin $ helloworld
Hello World!
Of course, you can do a lot more than print out a simple message. Shell programming is very powerful.
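For example, here is a hypothetical script (call it squares) that uses a loop and the standard expr command to do some arithmetic. Like helloworld, it would go in your bin directory and be made executable with chmod:

```shell
#!/bin/sh
# squares - print the squares of the numbers 1 through 5.
# expr is the traditional Bourne-shell arithmetic command;
# the backslash keeps the shell from treating * as a wildcard.
for n in 1 2 3 4 5
do
    echo "$n squared is `expr $n \* $n`"
done
```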
Modifying Your Environment
It is often necessary to modify your environment in UNIX. This means that you would like to make a permanent change to the configuration of your shell - for example, to add another path to the $PATH variable. Or perhaps you need to run an application that requires an environment variable to be set (GAUSS, for example).
First, let's find out where shell configuration is done. List all of the files in your home directory, including the hidden ones:

steel /N/fs1/clwolfe/Steel $ ls -la
-rw------- 1 clwolfe students   1428 Oct 28 08:33 .Xauthority
-rw------- 1 clwolfe students   4324 Feb 22 19:58 .bash_history
-rw------- 1 clwolfe students   3027 Mar 10  1999 .cshrc
-rw------- 1 clwolfe students    473 May  5  1999 .history
-rw------- 1 clwolfe students   1808 Mar 10  1999 .login
-rw------- 1 clwolfe iustaff   10602 Feb 23 10:05 .pinerc
-rw------- 1 clwolfe students   4228 May  5  1999 .profile
-rw------- 1 clwolfe students     12 Mar 10  1999 .sh_history
-rw-r----- 1 clwolfe students     80 May 17  1999 .signature
-rw------- 1 clwolfe students    207 May 11  1999 .spssrc
-rw------- 1 clwolfe students   1091 May 11  1999 .xmaplev5rc
drwx------ 2 clwolfe students   1024 Feb 22 11:56 Mail/
drwx------ 2 clwolfe iustaff      96 Feb 23 10:47 bin/
drwx------ 2 clwolfe students     96 Feb 22 11:14 stuff/
drwx--x--x 8 clwolfe students   1024 Feb 10 14:05 www/
Notice all of the files that start with a ".". These are configuration files for various programs. Each one is a plain text file that you could edit with pico. The various shells use different files for configuration.
Your shell executes these files as if they were shell scripts when you log in. This sets up your environment variables and configures any shell features.
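For example, if you use a Bourne-style shell, you might add lines like these to your .profile. The values shown are illustrations only; adjust them for your own account:

```shell
# Add your own bin directory to the search path,
# so your scripts can be run like any other command.
PATH=$PATH:$HOME/bin
export PATH

# Tell programs which text editor you prefer.
EDITOR=pico
export EDITOR
```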
You may edit the files like any other shell script, using pico.
Submitting Batch Jobs
Batch processing is usually done on the Research SP rather than on Steel, since the Research SP is designed for batch processing.
Recall from our discussion of processes that there are foreground and background processes. A batch job is a background process that is run at a later time, when it is convenient for the system. Batch jobs are usually used for any CPU-intensive task (i.e., any task that requires a large amount of number-crunching). Typically, an application package is used (such as Maple, SPSS, Matlab, SAS, etc.). Submitting a batch process is a three-step process:
- Write and debug the scripts you will use with your application. This might mean writing a syntax file for SPSS, an m-file for Matlab, etc.
- Write a batch script - a script that the batch scheduler will use to execute your job.
- Submit the batch script to the batch scheduler.
Step 1 is entirely dependent on what application you are using. Presumably, you know what you are doing for this step.
Step 2 is straightforward. The batch scheduler on the Research SP is LoadLeveler, another fine product from IBM. All you need to do is create a text file that looks like this:
#@ class=m
#@ group=standard
#@ requirements=(Feature=="maple")
#@ initialdir=/N/fs1/clwolfe/SP
#@ output=research.out
#@ error=research.err
#@ queue
maple -f -q <research
This is a script to run a Maple job. The first line declares that it is a job of class "m" - i.e., a math job. The third line indicates that Maple must be installed on the node that it will be run on. The fourth line sets the working directory for the job. The fifth and sixth lines direct STDOUT and STDERR to two files (since batch processing is non-interactive, you will lose this information if you don't save it). The seventh line tells LoadLeveler to queue the job for later processing.
Pay special attention to the last line. This is simply a command, as you would write at the command prompt, that executes Maple and feeds in the file named research, where I have (presumably) stored the Maple commands to be executed.
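The < redirection works the same way at the command prompt as it does in the batch script. This sketch uses wc as a stand-in for Maple, just to show a file being fed to a program's standard input:

```shell
# Put three lines of (pretend) commands into a file.
printf 'line one\nline two\nline three\n' > research.demo

# Feed the file to a program's standard input with <.
# wc -l counts the lines it reads; maple would execute them instead.
wc -l < research.demo

# Clean up the example file.
rm research.demo
```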
There are sample scripts available on the NFS server (i.e., they are available from any UITS UNIX machine, such as Steel, or the Research SP) in the directory /N/u/statmath/SP/scripts. You'll find a script there for every major application.
Save your batch script in your home directory, and give it a name you will remember.
The third step is to submit the batch job to the batch scheduler, using the llsubmit command. If the batch script was named myjob, you would use this command:

llsubmit myjob
To see a listing of the jobs that are currently in the queue, use the llq command:

llq