The following is a list of useful tools that you may find helpful while navigating your CS major/minor
If you have any tools you believe belong on this list, please email Ruben Gilbert
- 1 Notes on Style
- 2 Application Managers
- 3 File System Resources
- 4 HTCondor
- 4.1 What is Condor?
- 4.2 Types of Machines
- 4.3 How to Submit a Job
- 4.4 Preparing Your Environment
- 4.5 Writing a Submission File
- 4.6 Monitoring Your Submission(s)
- 4.7 How to Remove a Job
- 4.8 FAQ
- 5 Shell Commands/Tools
- 6 TTY Environments
Notes on Style
In order to easily read this guide, you should be aware of the notation being used. A statement may read:
$ [user@my-machine ~]: command <variable> some/kind/of/path
The $ is simply denoting that this line is from a console window. In other words, the text following a $ should be read as though it were in a Terminal window.
The [ ] enclose a typical terminal prompt, which usually contains the name of the user @ the machine they are currently logged into.
The ~ is shorthand for your home folder (and is generally the default location a console opens to). If you change to a different directory, that ~ becomes the bottom-level folder of your current path (e.g. if you are in /home/<username>/Documents/cs101, you would just see cs101 -- NOTE: You may change this behavior by editing your shell's prompt configuration, e.g. the PS1 variable in ~/.bashrc).
The : will be followed by the command being referenced.
Any variable value that changes based on the user will be enclosed in < >. (e.g. when a command requires a <username>, we don't actually enter "<username>", but instead your username).
A command that requires some/kind/of/path will need you to specify a path for the command to run to/from. Generally speaking, the path can be relative or absolute
Application Managers
Miniconda
Miniconda is a stand-alone release of the popular application package manager conda. Its older brother is Anaconda -- the conda package manager bundled with 150+ packages.
Miniconda is useful for two major reasons:
- Installing packages can be a pain. Depending on how your Python environment is structured, and which packages depend on other packages, installing Python modules manually can often cause conflicts or unforeseen problems. A package manager keeps track of dependencies and installs everything for you.
- Environments. Environments allow you to segment different sets of packages from one another to reduce conflicts.
- Example: Say you often program with packages X and Y, both of which require package Z to function properly. But what happens if package X requires version 1.0 of package Z, while package Y requires version 2.0? An application package manager allows you to create one environment with the correct versions of X and Z, and a completely separate environment with the correct versions of Y and Z. Depending on what you are working on, you can switch between the environments as needed. Further, say you use package Q in every project. You can install package Q a level above your environments so it is always available.
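A sketch of that scenario with conda (the package names pkgX, pkgY, and pkgZ are made-up stand-ins, and this assumes Miniconda is already installed):

```
$ [user@machine ~] conda create -n projectX -y pkgX pkgZ=1.0
$ [user@machine ~] conda create -n projectY -y pkgY pkgZ=2.0
$ [user@machine ~] source activate projectX
    ... work on the project that needs X ...
$ (projectX) [user@machine ~] source deactivate
$ [user@machine ~] source activate projectY
```

Each environment gets its own copy of package Z at the version it needs, so the two never conflict.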
Use for CS at Midd
Without a package manager like Miniconda, any time you want to do any kind of work with Python on the lab machines and a package you need is not installed, you have to ask the admins for it. In turn, the admins have to prepare an image with the specific Python package you need, plan a scheduled downtime for the lab, and reimage the lab to include the new package. This could potentially take a few days. If you then determine you need another package, the cycle repeats itself.
Miniconda allows you to create your own, self-contained Python environments owned by you, which are usable on all lab machines at any time. This is especially useful if you plan to use the HTCondor system, as you can guarantee that all the packages you need will always be available for condor to use.
You can get the installer you need from this page. The Windows version comes in an executable that you just need to run. The Mac version comes as a bash script, which will be similar to the Linux instructions below. If you need assistance getting Miniconda on your personal machine, talk with Ruben.
For the Linux lab machines, do the following:
- Download the "64-bit (bash installer)". From a lab machine, this can be achieved by opening a terminal and running:
$ [user@lab-machine ~] wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
- Run the installer. Assuming you are in the same directory as where you just downloaded the bash script above, all you need to do is run the command:
$ [user@lab-machine ~] bash Miniconda3-latest-Linux-x86_64.sh
- -This should result in the following output:
Welcome to Miniconda3 <version> (by Continuum Analytics, Inc.)

In order to continue the installation process, please review the license agreement.
Please, press ENTER to continue
- -Press ENTER
- -The Terms and Conditions will pop up. Press SPACE until you are met with the following prompt:
Do you approve the license terms? [yes|no]
- -Type "yes"
Miniconda3 will now be installed into this location:
/home/<username>/miniconda3

  - Press ENTER to confirm the location
  - Press CTRL-C to abort the installation
  - Or specify a different location below

[/home/<username>/miniconda3] >>>
- -Press ENTER
- The Miniconda installer will then install a bunch of stuff. When you are met with the following prompt:
Do you wish the installer to prepend the Miniconda3 install location
to PATH in your /home/<username>/.bashrc ? [yes|no]
[no] >>>
- -Type "yes"
You're all set! Now you should be able to create environments and install packages and have them available on whichever lab machine (including basin) that you are using.
Making an Environment
To view your environments, use the command:
$ [user@machine ~] conda info --envs
If you have just installed Miniconda, you will receive output something like this:
# conda environments:
#
root                  *  /home/<username>/miniconda3
This is telling you that you are currently using the highest level (root) Python environment. If you followed the installation instructions above, that environment should be /home/<username>/miniconda3.
To make a new environment, we invoke the conda create command:
$ [user@machine ~] conda create -n <env_name> -y python=<python_version> <list_of_packages_to_install>
$ [user@machine ~] conda create -n myEnv -y python=3.6 scipy numpy pytorch pillow pip matplotlib
If you receive this message then your environment has been created:
#
# To activate this environment, use:
# > source activate myEnv
#
# To deactivate this environment, use:
# > source deactivate myEnv
#
Deleting an Environment
To delete an environment, make sure it is first deactivated:
$ [user@machine ~] source deactivate
Then, run the conda remove command:
$ [user@machine ~] conda remove -n <name_of_environment> --all
Conda will list all of the packages it will remove; accept this at the prompt:
Proceed ([y]/n)? y
Changing Environments
When you first log in to a console, you will start off in your root environment (indicated by the asterisk).
$ [user@machine ~] conda info --envs
# conda environments:
#
myEnv                    /home/<username>/miniconda3/envs/myEnv
root                  *  /home/<username>/miniconda3
To change this, we use the source activate command:
$ [user@machine ~] source activate myEnv
$ (myEnv) [user@machine ~]
The active environment will be listed in parentheses before your usual prompt. To verify this:
$ (myEnv) [user@machine ~] conda info --envs
# conda environments:
#
myEnv                 *  /home/<username>/miniconda3/envs/myEnv
root                     /home/<username>/miniconda3
This means that invoking python (or python3) will use the packages from the myEnv environment.
To deactivate the current environment:
$ (myEnv) [user@machine ~] source deactivate
$ [user@machine ~]
Pip Within Conda
pip is the official tool for installing Python packages. It has access to the Python Package Index (PyPI), an extremely large repository from which you can install a wide variety of Python software. Conda allows you to embed separate copies of pip inside your environments, making it even easier to get access to software that conda may not host in its databases.
With the correct environment activated, you can:
$ (myEnv) [user@machine ~] conda install -y pip
to get pip. From now on, when using the myEnv environment, you will have access to any package available through pip install. If you want all of your environments to have access to pip, install it in the root environment (i.e. deactivate all environments and then use conda install -y pip).
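Once pip lives inside an environment, installs go to that environment only. A quick transcript sketch (requests is just an example package; the which output assumes the install location from the instructions above):

```
$ (myEnv) [user@machine ~] which pip
/home/<username>/miniconda3/envs/myEnv/bin/pip
$ (myEnv) [user@machine ~] pip install requests
```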
nvm
Node Version Manager is...well...a Node.js version manager! You can view it on GitHub here
TODO: Examples and usage
File System Resources
By enrolling in a Middlebury CS course, a CS user account is created for you. This account inherits your Middlebury username and password, but has resources that are distinct from your college account.
The account that is generated for you is a typical Unix user account. This means it has the standard folder structure you would expect on a Linux or Mac machine (i.e. Desktop, Documents, Pictures, etc).
You are the owner of your account and all files contained within your subdirectory. You are free to delete folders/files inside your user directory as you wish (now, remember, "Just because you can, doesn't mean you should" -- but if you really have an itch to delete everything, you can). You can create subfolders for classes or projects whenever you want. You can upload files to or download files from your user directory freely. It is YOUR account.
By default, your folder is viewable by anyone on the network (permission level rwxr-xr-x, or 755). This means people who are not you can view and execute (but NOT write to) files inside your user directory. If you would like to change this, talk to Ruben or research the chmod command and Unix file permissions.
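As a sketch of how you might tighten those permissions (the directory name coursework is a made-up example):

```shell
# Make a practice directory; mimic the department default of 755 (rwxr-xr-x)
mkdir -p coursework
chmod 755 coursework
stat -c '%a' coursework    # prints 755

# Restrict it so only the owner can read, write, or enter it (rwx------)
chmod 700 coursework
stat -c '%a' coursework    # prints 700
```

The same chmod invocations work on individual files as well as directories.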
public_html
One of the many folders created inside your home folder is your public_html folder. The purpose of the public_html folder is to house any and all documents that you want to be public-facing. This includes a webpage!
By default, this directory contains a single file, index.html, with the following contents:
<html>
  <head>
    <meta http-equiv="content-type" content="text/html; charset=UTF-8">
    <title>Empty page</title>
  </head>
  <body>
    I have not yet set up my home page.
  </body>
</html>
The CS web server is pointed at each user's
~/public_html/index.html file. Consider this your "homepage" (i.e. if someone types in the url
www.cs.middlebury.edu/~<your_username>, they will land on your index.html page). Any pages you want to link to from your homepage must be within your public_html folder.
HTCondor
HTCondor is a specialized workload management system that excels at providing high-throughput computing via collections of distributively owned computing resources.
What does that mean?!
Condor uses otherwise unused CPU cycles to compute various sets of jobs.
What does that mean with respect to Midd CS?!
The MBH 632 lab is full of machines that are often idle or under light use. The condor system allows us to utilize the unused computing power of the lab to:
- run processes that we may otherwise not have the computing power to perform efficiently
- drastically shorten the amount of time it takes to run large numbers of iterations of the same set of code
- run a job away from a personal machine (eliminating hazards such as accidentally killing a job by letting your laptop sleep, etc)
In the following sections, I look to provide a summary of what the system entails, as well as how you can use it.
NOTE: This guide is designed to be a reference, not a full-fledged usage-manual. If you want assistance with your first usage of condor, do not hesitate to reach out to Ruben
What is Condor?
Before we start throwing code at condor, we should understand, in a general sense, what the system is and how it is set up. Here is a link to the full condor manual, if a daring soul is interested. Be warned: it is an extremely long document (1146 pages, to be exact), much of which is not pertinent to your use of condor.
From any MBH 632 lab machine console, you can run the command:
$ [user@lab-machine ~] condor_status
This is, expectedly, polling the condor system for its current status. You should receive output looking something like the following (NOTE:
condor_status is a one-time polling command. It does not auto-update):
Name                               OpSys  Arch    State      Activity  LoadAv  Mem   ActvtyTime

slot1@abe.cs.middlebury.edu        LINUX  X86_64  Owner      Idle       0.000  1994  2+16:35:36
slot2@abe.cs.middlebury.edu        LINUX  X86_64  Owner      Idle       0.000  1994  2+16:35:37
...
slot8@abe.cs.middlebury.edu        LINUX  X86_64  Owner      Idle       0.000  1994  2+16:35:35
slot1@<machine>.cs.middlebury.edu  LINUX  X86_64  Unclaimed  Idle       0.000  1994  0+00:04:35
slot2@<machine>.cs.middlebury.edu  LINUX  X86_64  Unclaimed  Idle       0.000  1994  0+00:05:05
.
.
.

                     Machines  Owner  Claimed  Unclaimed  Matched  Preempting

        X86_64/LINUX      248      8        0        240        0           0

               Total      248      8        0        240        0           0
In more sophisticated setups of Condor, the output of
condor_status may look less uniform. For our purposes, though, we have a lab with 31 identical machines. Therefore, we are going to see a long list of similar information. The info we get is broken down into the following categories:
- Name specifies the slot (thread) and name of the machine.
- The MBH 632 lab machines have 4 core, 8 thread CPUs, which Condor identifies as "slots". Each slot can have a job matched to it.
- OpSys is the operating system of the machine.
- All of the lab machines are running Fedora, so we see LINUX.
- Arch is the architecture of the machine's CPU.
- All of the lab machines have 64-bit Intel i7 CPUs using the x86-64 instruction set (think CS202!)
- State refers to a machine's current activity.
- The only states you really need to know about are Owner, Unclaimed, and Claimed.
- Owner references the Central Manager machine.
- Unclaimed means that particular slot of the machine is available to handle a condor job
- Claimed means that particular slot of the machine is claimed by a condor job (and may, or may not, actually be running it)
- Activity specifies what the slot is currently doing. You will likely only ever see Idle or Busy here, which are self-explanatory.
- LoadAv denotes the recent CPU load average of that slot (not very useful for non-admins)
- Mem is the amount (in MB) of memory (RAM) available to that slot.
- Each lab machine has 16 GB of RAM. The 16 GB (16384 MB) of RAM has roughly 15954 MB of usable memory. Divide that among 8 slots and you get 1994 MB of memory per slot.
- There are ways to specify the need for more memory in a condor job, but, for our purposes, 1994 MB per slot is plenty (and you don't really get a choice because all the machines are the same!)
- ActvtyTime is just noting how long a machine has been in its current state (not very useful for non-admins).
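The per-slot memory figure above is just integer division, which you can sanity-check in the shell:

```shell
# ~15954 MB of usable RAM split evenly across 8 condor slots
echo $(( 15954 / 8 ))    # prints 1994
```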
Types of Machines
In most condor pools (including the Middlebury pool), there are 3 types of machines: Central Manager, Submit, and Execute. The major distinction between each type of machine lies in which condor daemons are running.
The Central Manager machine is solely responsible for resource allocation in the condor pool. When a job is submitted, it is the Central Manager's role to "advertise" the job to all execute machines (i.e. the Central Manager looks at the job request and compares it to the qualities of every execute machine until it finds a viable match, at which point it ships off the job to be executed). In the Midd configuration, the Central Manager machine is
abe.cs.middlebury.edu. This machine will never run any jobs (for the sake of security). This is why it holds the specific state of Owner, and will never become Unclaimed.
At least one machine needs to be able to submit jobs to the condor pool. In the Midd configuration, there is exactly one (1) machine that can submit jobs to the pool:
camelshump.cs.middlebury.edu (the machine closest to the printer). Submit machines can also be execute machines (in the Midd configuration, this is true -- camelshump submits jobs, but can also execute them).
Execute machines are exactly that: machines that execute jobs. They stand-by in an idle state until they receive a request from the Central Manager.
How to Submit a Job
- Step 0: Make absolutely sure your code is free of infinite looping and/or infinite recursion errors. Admins will not be actively monitoring the queue, and therefore won't necessarily catch jobs that have been running for an abnormally long amount of time unless alerted to it. Users have the ability to remove their own jobs from the queue -- See: How To Remove a Job. If you accidentally submit buggy code, please be kind to anyone else wanting to use the system and remove the buggy job from the queue.
- Step 1: Ensure you are connected to the Submit Machine with your Middlebury account. This can be either locally or remotely via SSH.
- If you attempt to submit a job from a non-submit machine, you will be met with:
ERROR: Can't find address of local schedd. That's condor's way of telling you that you submitted from the wrong machine.
- Step 2: Make sure your environment is prepared. See: Preparing Your Environment.
- Step 3: Write a submission file. See: Writing a Submission File.
- Step 4: Submit your job to the condor pool:
$ [user@camelshump ~] condor_submit your_submit_file.sub
After a moment, you should get a response that your job(s) has been submitted.
Preparing Your Environment
The Middlebury condor system has been configured to recognize that your CS account uses a shared file system (i.e. you can log in to any machine and see all of the same files). Therefore, it will generally benefit you to set up a working environment that is centered around your user account (as opposed to a particular machine).
What does that mean? Let's use Python as an example:
- Let's say you have a complicated application that relies on some 3rd party modules (Scipy, Numpy, Beautiful Soup, etc) that are installed to your laptop's Python installation, as well as some user-created modules (myMod1.py, myMod2.py, etc) that are stored on your laptop in the same directory as your main Python file. If you were to log in to a lab machine and run your main Python file (assuming its supporting user-created modules are in the same directory), it might not work. The local installation of Python on the lab machine may not have the same 3rd party modules installed as your laptop's local Python installation. This means if condor were to ship your main Python file off to a bunch of machines, depending on what modules you need, your app could crash.
What is the suggested solution to avoid this? Specifically for Python, I recommend setting up a Miniconda virtual environment local to your user account with all of the modules you need. This way, no matter what machine your code is shipped off to, it's still tied to your user account (which will have the tools it needs). Really, this is a long way of saying, "If you can't log in to a random lab machine and have your code work without touching anything, you need to do some preparation".
The approach would be similar with other languages -- just use an application-level package manager that supports whichever language you are using.
In order for condor to run your code, it must be in an executable format. This means one of two things must be true:
- If you are using a compiled language, your code must be pre-compiled into its executable form
- Example: Java code must already have been passed through javac to produce a .class file.
- Example: C code must already have been passed through gcc to produce an executable file.
- If your language of choice does not produce an executable (i.e. relies on an interpreter), it must provide the proper shebang (#!) line so condor knows which interpreter to invoke.
- Example: a shell script must include #!/bin/bash
- Example: the main Python file must include #!/usr/bin/env python
Lastly, in order for condor to be able to run whichever file you pass it as your "executable" (pre-compiled, or interpreted), it must have the proper permissions. Generally speaking, you will likely be the one that owns the file you are passing to condor ("you" meaning the account associated with your username). In the directory with your file, you can use the command:
$ [you@camelshump ~] ls -l
to view the permissions of all files in the current directory. This will output something like:
-rw-rw-r-- 1 you you 22 Sep 6 13:08 example.py
Looking at the left side set of characters, this is reporting that the owner (you) has permissions to read and write to the file. Anyone in the group "you" has permissions to read and write to the file. And, anyone else only has permission to read the file. We need to change this so that everyone can run this file (specifically, the condor user needs to be able to). So, run the command:
$ [you@camelshump ~] chmod 755 example.py
and you will notice the file changes to the following:
-rwxr-xr-x 1 you you 22 Sep 6 13:08 example.py
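Putting the shebang and permission pieces together, here is a minimal sketch (the file name and printed message are made up for illustration):

```shell
# Create a tiny interpreted program with a proper shebang line
cat > example.py <<'EOF'
#!/usr/bin/env python3
print("ready for condor")
EOF

# Grant execute permission so condor (or anyone else) can run it directly
chmod 755 example.py

# The script can now be invoked like any other executable
./example.py    # prints: ready for condor
```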
If you need any assistance preparing your environment, setting up your files, or you need software that isn't available/easily configurable via package managers, talk with Ruben.
Writing a Submission File
Submission files are what drive the user-experience of condor.
At a minimum, a submission file needs to provide condor with:
- An executable command
- A queue command
All other commands are optional. For a full reference of commands, see: the condor_submit manual page. There are a lot of them. Below, I look to break down the commands that you will likely find useful, as well as commands that you must use for the Middlebury configuration.
Before you do anything, you will need to create a file (usually in the same directory as your executable). By convention, the name of your submission file should be
something_related_to_your_code.sub. This, along with your username, will be what shows up in the condor queue.
Commands are written in the form:
<command_name> = <value>
- universe
- -For Java code, you will want:
universe = java
- -For just about everything else, you will want:
universe = vanilla
- executable
executable = <your_executable_file>
- -For Java, your executable will be your .class file
- queue
- -Tells condor how many copies of the current job you want to put in the queue. Defaults to 1.
- getenv
- -We want condor to specifically use your environment variables when running your application.
getenv = True
- requirements
- -You are allowed to specify any number of specific requirements for your job. We need to tell condor to use our shared file system.
requirements = TARGET.UidDomain == "cs.middlebury.edu" && TARGET.FileSystemDomain == "cs.middlebury.edu"
- arguments
- -Provides command line arguments to your program (space delimited, all one string).
- -If you are running a Java universe job, you must supply at least one argument, where the first argument is the class name (i.e. if your executable is Example.class, the first argument must be Example)
arguments = "arg1 arg2 arg3"
- log
- -Creates a log file of what condor is doing (useful if something goes wrong with condor)
log = <some_name>.log
- output
- -Condor is a non-interactive system (i.e. it can't take in input or print to stdout). Its substitute for this is generating an output file; anything your program writes to stdout lands here.
output = <some_name>.output
- input
- -Similar to output: if your code expects user input at any point, you can provide pre-written input in a file that will be read as if entered via stdin.
input = <some_name>.input
- error
- -Condor cannot report to stderr, either. It writes any errors to an error file instead (useful if something is wrong with your code)
error = <some_name>.error
- notification and notify_user
- Can be used in conjunction with one another to send you an email when your job(s) is done running.
- BE WARNED: THIS WILL SEND YOU AN EMAIL FOR EVERY COMPLETED JOB, NOT JUST ONE EMAIL PER SUBMISSION. YES, THAT MEANS IF YOU QUEUE 1000 JOBS TO STRESS TEST THE SYSTEM AND LEAVE THIS OPTION SELECTED, YOU WILL GET 1000 EMAILS! RUBEN DEFINITELY DID NOT DO THIS!
notification = Always
notify_user = <username>@middlebury.edu
- initialdir
- -Allows you to specify a new path for input and output files to be taken from/generated in. Useful when running the same job many times (nicely segments your output files, etc).
- The directory(ies) need to be created beforehand. Let's say you want to run your code 50 times. You could make a testruns/ directory and inside testruns make 50 folders labeled run0/ to run49/. Then, using a condor predefined variable $(Process), each run's data will be put in the corresponding numbered folder.
initialdir = testruns/run$(Process)
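Those run directories can be created in one shot before submitting (this loop works in any POSIX shell):

```shell
# Pre-create testruns/run0 through testruns/run49 for initialdir to use
for i in $(seq 0 49); do
    mkdir -p "testruns/run$i"
done

# Sanity check: there should be exactly 50 directories
ls testruns | wc -l    # prints 50
```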
Example Submission File
Let's say you have a Python file called example.py. You want to run this code one time. Here is a potential submission file
universe     = vanilla
executable   = example.py
log          = example.log
output       = example.output
error        = example.error
getenv       = True
notification = Always
notify_user  = <username>@middlebury.edu
requirements = TARGET.UidDomain == "cs.middlebury.edu" && \
               TARGET.FileSystemDomain == "cs.middlebury.edu"
queue
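Building on that, a hedged sketch of a 50-run variant of the same example.py job, using initialdir (this assumes the testruns/run0 through testruns/run49 directories already exist):

```
universe     = vanilla
executable   = example.py
getenv       = True
requirements = TARGET.UidDomain == "cs.middlebury.edu" && \
               TARGET.FileSystemDomain == "cs.middlebury.edu"
initialdir   = testruns/run$(Process)
log          = example.log
output       = example.output
error        = example.error
queue 50
```

Each of the 50 queued jobs writes its log, output, and error files into its own run directory, keeping the results cleanly separated.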
Monitoring Your Submission(s)
You've just submitted a condor job! Now what?!
Well, you could just kick back and relax until you get an email that your job is done. Maybe it's been a while and you're wondering what happened to your job. How can you see what your job is doing?
$ [user@lab-machine ~] condor_q
You can find the full documentation for condor_q here. For a standard user, this command will output all jobs they have submitted to the condor queue. Let's say (in a contrived example) you just submitted a job to run sleep.py (a program that literally just sleeps for 30 seconds). You can call
condor_q to get the following:
-- Schedd: camelshump.cs.middlebury.edu : <184.108.40.206:9618?...
 ID      OWNER      SUBMITTED     RUN_TIME ST PRI SIZE CMD
 3.0     username   9/6  14:32   0+00:00:03 R  0   0.0  sleep.py

1 jobs; 0 completed, 0 removed, 0 idle, 1 running, 0 held, 0 suspended
- ID is a user-specific identification number for the current job. If you queue 10 jobs in the same submission, you'd get a list of 10 jobs all sharing the number before the decimal, differing in the number after the decimal.
- OWNER should be your username, since you are looking at your queue!
- SUBMITTED is the date and time you submitted the job.
- RUN_TIME is how long the job has actively been running for.
- ST is the status of the job. You will likely only ever see I, R, and H.
- I means the job is idle. Generally, this would mean there either isn't a machine available yet, or the Central Manager is currently looking for an available machine.
- R means the job is currently running.
- H means the job is being held in queue. Usually, this is a sign that something is wrong with your submission (e.g. a match can't be made based on requirements, wrong universe is being used, etc). Review your submission and if you don't see anything standing out, talk with Ruben.
- PRI is the priority of the job with respect to the entire condor queue. Should always be 0.
- SIZE is the peak amount of memory (RAM) the job has used.
- CMD is the name of the executable driving the job.
The bottom of the output will give you a synopsis of all your jobs that are in queue. The output of the condor_q command can be modified and formatted differently. See the manual page for examples.
How to Remove a Job
Removing jobs from the condor queue is driven by the command condor_rm. There are two quick ways to remove jobs from your condor queue using it.
By Username
If you have submitted a bunch of jobs to condor, and you just want to get rid of them all, you can bulk remove ALL jobs you have submitted by running:
$ [<username>@lab-machine ~] condor_rm <username>
All jobs of user "<username>" have been marked for removal
By Process ID
If there is a specific job that you want to remove from the queue without removing other jobs, you can provide a process ID as an argument. Poll
condor_q to get the job's ID, then call:
$ [<username>@lab-machine ~] condor_rm <job-id>
All jobs in cluster <job-id> have been marked for removal
FAQ
Q: Can I run parallel applications on this system?
A: Not yet! Soon™.
Q: I submitted a job, but it won't switch off idle, what gives?
A: Run condor_status and see if there are any unclaimed machines. If all machines are claimed, you'll have to wait your turn! If there are unclaimed machines and your job is still sitting idle, try running condor_q -analyze to get a more verbose report of what's going on with your queue.
Q: I submitted a job and it is stuck in a held state, what do I do?
A: Run condor_q -analyze to see what the problem is. Double check that there are no errors in your submission file.
Q: How do I remove a specific piece of a cluster?
A: Take a look at the -constraint flag for the condor_rm command. You can reference the ClusterId and ProcId ClassAds.
A: That's not a question, but email Ruben anyway!
Shell Commands/Tools
PuTTY
NOTE: As of Windows 10, ssh is supported on Windows through the PowerShell application -- PuTTY still works, but you may not need it
PuTTY is an open source SSH client developed for the Windows platform. You can download it here (just the putty.exe binary form will do, but if you are feeling ambitious you can download the .msi installer for all of the tools).
The PuTTY client has many options for customization (similar to optional arguments with the ssh command). But, to get the basic usage out of it, all you need to do is supply the full hostname of the machine you want to connect to.
If the connection can be made, you will be prompted with a window asking you the username that you would like to login with. After entering a username, you will be prompted for the password associated with the username. Upon successful authentication, you should see a terminal-esque window like what you would see in a Unix environment.
SCP
Secure CoPy is a command that combines the ssh command with the cp (copy) command. SCP can be used to push local files to a remote server, or it can be used to get a file from a remote server and save it locally.
You can find the manual page here or by typing "man scp" in a terminal window.
NOTE: This command only works for UNIX environments. For Windows, see the pscp utility provided by PuTTY
$ [user@my-machine ~]: scp path/to/file/to/send <username>@remote-location:path/to/location/to/put/file
$ [user@my-machine ~]: scp <username>@remote-location:path/to/file/to/get path/to/location/to/put/file/locally
$ [user@my-machine ~]: scp ./Documents/my_homepage.html email@example.com:~/public_html/homepage/
$ [user@my-machine ~]: scp firstname.lastname@example.org:~/cs101/homework1.py ~/Documents/this_semester/cs101/
Tip: You can specify multiple source files in one scp command. Every argument before the last is treated as a source, and the final argument is the destination (so you can list many local files followed by a remote destination, or several remote files followed by a local destination). You can also supply the -r argument to recursively copy entire directories.
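For example (file and directory names here are placeholders), pushing two files at once and then recursively copying a whole directory looks like:
$ [user@my-machine ~]: scp notes.txt report.pdf <username>@remote-location:~/backup/
$ [user@my-machine ~]: scp -r ./my_project <username>@remote-location:~/projects/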
Secure SHell is a network protocol that allows remote console (i.e. terminal) login from one machine to another. The Middlebury CS department machines all support SSH from the on-campus network. In addition, there is one machine (basin) that is specifically given a hole in the Middlebury firewall to allow off-campus connections.
You can find the manual page for the SSH command here or by typing "man ssh" on a Mac or Linux terminal. From the manual, the SSH command looks complicated; it's not! There are many optional arguments supported, but to get basic functionality all you need to supply to the command is the username you want to connect with and which machine you want to connect to. If the connection can be established, you will be prompted for the password of the account you are trying to connect with. If the connection cannot be established, you will be given some form of a "cannot resolve hostname" or "connection timed out" error (usually this means the machine is either disconnected from the network, or powered off).
NOTE: The examples below are all in UNIX format. To use SSH from a Windows machine, see the #PuTTY section.
The first time you remotely connect to a machine, you will be given a warning that the authenticity of the machine you are trying to connect to cannot be verified. Assuming you have correctly entered the name of a Middlebury-managed machine or a Middlebury IP address, you can safely enter "yes". For the sake of completeness, though, you should be aware that spoofing is a thing.
$ [user@my-machine ~]: ssh <username>@machine-name
$ [user@my-machine ~]: ssh <username>@ip-address
$ [user@my-machine ~]: ssh <username>@killington.cs.middlebury.edu
$ [user@my-machine ~]: ssh <username>@220.127.116.11
Tip: If you are off-campus and need to connect to a specific machine, you can tunnel through basin to the machine you need with two ssh commands:
$ [user@my-machine ~]: ssh <username>@basin.cs.middlebury.edu
$ [username@basin ~]: ssh <username>@killington.cs.middlebury.edu
Alternatively, you can do both hops in one command (the -t flag forces a terminal to be allocated for the second ssh):
$ [user@my-machine ~]: ssh -t <username>@basin.cs.middlebury.edu ssh <username>@killington.cs.middlebury.edu
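On reasonably recent versions of OpenSSH (7.3 and later), you can also automate the hop with the -J (ProxyJump) flag, or save it permanently in your ~/.ssh/config (the machine names below follow the examples above):
$ [user@my-machine ~]: ssh -J <username>@basin.cs.middlebury.edu <username>@killington.cs.middlebury.edu
Or, in ~/.ssh/config, so that plain "ssh killington" does the tunneling for you:
Host killington
    HostName killington.cs.middlebury.edu
    User <username>
    ProxyJump basin.cs.middlebury.edu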
Tmux is a Terminal MUltipleXer. It allows for multiple consoles within a single window, as well as the ability to detach and reattach processes from a single session. You can find the manual page here or by typing "man tmux" in a console.
NOTE: Tmux is a UNIX-only command. While untested by this author, the popular Windows-equivalent is ConEmu (short for Console Emulator). It allows for multiple Command Prompt or PuTTY sessions to be emulated alongside one another.
NOTE 2: If you are a Mac user and would like this utility installed on your machine, go here. It is installed on all CS Linux machines by default.
Keyboard shortcuts can be edited in the file ~/.tmux.conf. Here is a popular community cheat sheet.
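As a sketch of what edits to ~/.tmux.conf look like, the lines below rebind the prefix and enable mouse support (the ctrl+a prefix is a popular community choice, not a default or a requirement):
# Rebind the command prefix from ctrl+b to ctrl+a
unbind C-b
set -g prefix C-a
bind C-a send-prefix
# Let tmux respond to mouse clicks and scrolling
set -g mouse on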
Tmux commands come after what is referred to as a "command prefix". By default, "ctrl" + "b" is the command prefix (you can edit your prefix in #Keyboard_Shortcuts). In this tutorial, "ctrl" + "b" will be shortened to CB, and commands will be written in the format CB --> <key_to_press_after_prefix>.
You can achieve the most basic functionality of Tmux simply by calling it:
$ [user@my-machine ~]: tmux
Nothing will appear to happen, except the bottom of your console should change color. This means you are in a tmux session with one pane.
Let's split our single tmux pane horizontally.
CB --> "
Now we have 2 consoles, one on top of the other. These consoles are independent of one another. You can use one to ssh to a remote server, and the other to search for local files on your machine, for example.
You can swap panes with CB --> o, or you can allow tmux to read mouse input by adding the line set -g mouse on to your .tmux.conf file. You can split panes as many times as you want (or until they become illegible!) with CB --> " for horizontal and CB --> % for vertical.
You can detach a tmux session from the current console with CB --> d (or by typing tmux detach). This means the process remains running, but as a distinct program separate from the current console window (useful, for example, if you are ssh'd into a server and want a process to continue running after you logout). You can reattach a tmux session with:
$ [user@my-machine ~]: tmux attach
If you intend to have multiple sessions of tmux running inside a single console session, you can name, attach, detach, and switch between them:
$ [user@my-machine ~]: tmux new -s session1
$ [user@my-machine ~]: tmux detach
$ [user@my-machine ~]: tmux new -s session2
$ [user@my-machine ~]: tmux detach
$ [user@my-machine ~]: tmux attach -t session1
$ [user@my-machine ~]: tmux switch -t session2
If you don't remember all of the sessions you have running, you can list them by name with:
$ [user@my-machine ~]: tmux list-sessions
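When you are finished with a named session entirely (not just detached from it), you can kill it by name:
$ [user@my-machine ~]: tmux kill-session -t session2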
To kill the current pane, use CB --> x. This will prompt at the bottom of your screen for a y/n to confirm.
There is so much more you can do with tmux, but this is just some basic functionality to get you up and running.