To prevent users from monopolizing system resources with runaway processes, we impose ulimits; if your processes try to exceed them, they will die, sometimes mysteriously. You can see which limits apply to your current login session by running the following command, with sample output shown:
you@new:~$ ulimit -aS
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 20
virtual memory          (kbytes, -v) 100000
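If you only care about one resource, you can pass its flag alone instead of -a; for example, checking the soft limit on open files shown in the listing above:

you@new:~$ ulimit -Sn
1024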
These limits are known as soft limits. Similarly, you can see your hard limits by running ulimit -aH. Soft limits are the limits that actually apply to your processes at any given time; you may raise a soft limit to any value up to the corresponding hard limit. For instance, if you really need to run 21 processes at once instead of just 20, you can, because your hard limit on processes is probably 50. The command is:
ulimit -Su 21
The -S indicates that you are setting a soft limit, and u selects the resource being limited (max user processes), matching the flags shown in the output above.
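Putting it together, a session might look like the following sketch (the hard limit of 50 here is the assumed value mentioned above; your actual limits may differ):

you@new:~$ ulimit -Hu          # check the hard ceiling first
50
you@new:~$ ulimit -Su 21       # raise the soft limit, staying at or below the hard limit
you@new:~$ ulimit -Su          # confirm the new soft limit
21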
You can even decrease your available resources, to put yourself in a self-imposed sandbox. For instance, you can lower your hard limit on stack size to 1000 kbytes by running:
ulimit -Hs 1000
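Be aware that lowering a hard limit is a one-way door for ordinary users: once lowered, a hard limit cannot be raised again within that shell (only root can raise hard limits), though a fresh login starts over with the defaults. A sketch of what that looks like in practice; the exact error text varies by shell:

you@new:~$ ulimit -Hs 1000
you@new:~$ ulimit -Hs 8192
-bash: ulimit: stack size: cannot modify limit: Operation not permitted
you@new:~$ ulimit -Hs
1000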
1. How Draconian! Why do we have these limits??
On our old server, we had multiple incidents of users running benign yet out-of-control processes that ended up allocating all available memory. Shared daemons, like our web and mail servers, would then crash when trying to allocate memory, creating a denial of service for everyone. The ssh daemon wasn't even able to work properly without available memory, so we sometimes needed to ask the techs at our hosting facility to reboot the server for us! It's better to be safe than sorry when it comes to protecting users from breaking other users' services.