What is the Shell?
As described in the previous section, when a terminal is opened it runs software that provides access to the computer's operating system. This software has become known as "the shell". Let's take a quick look at how that name came to be used.

At a high level, operating systems are designed with two distinct parts:

1. A first part which contains the low-level software that has complete control of system resources.
2. A second part which "surrounds" the first to prevent access to it, except for specific, pre-defined actions that application software is allowed to take.

This concept has been visualized as a nut contained within its shell, where the meat of the nut (the "kernel") contains the low-level software, and the hard shell that surrounds it prevents outside access except for specific, pre-defined "system calls", such as those defined by the Linux API.

With this in mind, when users open a terminal they are presented with an application that accepts commands typed in the terminal, executes them, and returns the output to the terminal interface. This application provides users an interface to make system calls, call other applications, and execute scripts that define higher-level interactions with the operating system. Since these "shell applications" allow users to interact directly with the shell, they have become synonymous with "the shell".

On Linux systems, the most common shell application is Bash, which is often installed by default, though other popular shell applications include ksh, zsh, and fish. Since Bash is the most common shell, we will use it for our examples; most shells feature backwards compatibility with Bash, so most examples should work across other shells as well.
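If you are curious which shell application your own terminal session is running, one quick sketch (assuming a Unix-like system where `ps` is available) is to ask for the name of the current process:

```shell
# $$ expands to the process ID of the current shell; `ps -o comm=` prints
# just that process's command name (for example "bash" or "zsh").
ps -p $$ -o comm=
```

On a Linux system where Bash is the default, this typically prints `bash`.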
As terminals evolved from hardware to software, the boundaries between the terminal and the shell became increasingly blurred, to the point where the concepts of "the terminal", a "terminal emulator", "the console", and "the shell" are used interchangeably in many conversations. This has led to confusion about what each term actually refers to. Possibly the most common term for these concepts among users is "the command-line", which is the topic of the next section.
Streams
We have seen how to execute commands, and how to build pipelines that pass information between commands to perform more complex tasks. In this section, we will touch on how this information is passed between commands, using "streams".

A stream refers to the information that flows through the pipeline, from command to command. A command receives information from the "input stream", processes it, then passes the result to the "output stream" (or possibly sends error information to the "error stream", if needed).

To help solidify the concept of a stream, suppose there is a command that reads text from the input, modifies it in some way, then passes the result to the output. One might expect the process of executing that command to be:

1. Read the input into memory
2. Modify the content in memory
3. Pass the modified content to the output

However, consider two cases:

First, suppose that the input comes from a file that requires more memory than is available on the system. In this case it would be impossible to load the entire file into memory, and the command would fail.

Second, suppose that the input comes from a continuous source, such as a temperature sensor that reads at regular intervals. Since the temperature is constantly being monitored, there is no such concept as "the end of the input".

In order to handle these situations effectively, the process is a bit more like this:

1. Open the input source
2. Read a line of data
3. If the end of the input stream is detected, stop processing and close the stream
4. Perform some operation on that line and pass the result to the output
5. Repeat from step 2

With this in mind, once the input stream is opened and a line is read, the command doesn't know how much data is contained in the input; it simply reads a line, processes it, and passes the result to the output, then reads another until (possibly) all input data has been consumed.
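This line-by-line loop can be sketched directly in Bash (a toy example; the three input lines and the per-line "operation" are just placeholders):

```shell
# Feed three lines into a loop that reads one line at a time, transforms
# it, and writes the result to the output stream. The loop never needs to
# know how many lines are coming; it simply stops when the input ends.
printf 'red\ngreen\nblue\n' | while IFS= read -r line; do
    echo "color: $line"    # the per-line operation
done
```

This prints `color: red`, `color: green`, then `color: blue`, stopping automatically when the input stream is exhausted.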
Similarly, the next command in the pipeline simply receives each line of data, processes it, then passes the result to its output. This flow of lines of data led this concept to be called a "stream".

Standard Streams

When any of input, output, or error are not specified, they each default to a specific "standard" stream. Under the hood, each stream is implemented as a file that can be read from or written to. There are three standard streams:

Standard Input

Standard input, called stdin and sometimes referenced numerically as "0", is the stream from which a program reads its input data, if not otherwise specified. Not all commands require an input stream. For example, the ls command, which displays information about files contained in a directory, reads information from the filesystem without any input data stream.

Standard Output

Standard output, called stdout and sometimes referenced numerically as "1", is the stream to which a program writes its output data. By default this is usually connected to the terminal, so that the results of a command are printed to the screen. Not all commands generate output. For example, the mv command, which renames a file on the filesystem, does not generate any output when it is successfully invoked.

Standard Error

Standard error, called stderr and sometimes referenced numerically as "2", is an alternative output stream that is used by commands for error or diagnostic information. The main purpose of stderr is to allow a command to generate diagnostic feedback without polluting the output stream.

Note: This section is intended to provide just enough information to allow users to begin understanding and using streams in many common applications. If you want to dig a bit deeper into streams and how they work, a good place to start is Everything is a File.
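One way to see that stdout and stderr really are separate streams is to run a failing command inside a pipeline (the path here is just an example of something that doesn't exist). A pipe only carries stdout, so the diagnostic from ls still reaches the terminal directly via stderr:

```shell
# ls writes its error message to stderr, not stdout, so `wc -l` (which
# counts the lines arriving on its stdin) receives nothing and prints 0,
# while the ls error message appears separately on the terminal.
ls /nonexistent-path | wc -l
```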
One of the useful features of the standard streams is that they can be replaced by other streams, combined, or sent to other locations in a process called redirection, which is the topic of the next section.
Commands
Overview

At a high level, command-line commands represent a piece of code that:

1. Optionally receives input data, then
2. Optionally performs an operation on that data, then
3. Passes the (possibly modified) data to the output, and
4. Optionally generates an error message.

The following diagram provides a useful representation of this: the "natural" flow goes from the input, to the command, then to the output. If an error occurs, then information about that error is sent to the error stream. With this diagram in mind, let's proceed to see some examples that will help solidify this concept.

Executing Commands

Commands are executed from the command line by typing the name of the command, possibly followed by options and/or arguments. Arguments provide the command with information that it needs to perform its task. For example, a command that operates on a file might be passed the name of the file to perform those operations on. Options provide "knobs" that allow the user to customize how the command behaves. The command, its arguments, and its options can be combined into a call signature, which describes the format to be used when calling the command.

As a simple example, the echo command is often called with a single string as an argument:

ninja$: echo Hello World!
Hello World!
ninja$:

which takes the argument (Hello World!) and passes it to the output which, by default, prints the output to the terminal screen. In the next section we will see how to use this output as the input for other commands.
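As a quick sketch of how options change a command's behavior (this assumes Bash's built-in echo, where the `-n` option is supported; other shells may differ):

```shell
# Without options, echo prints its argument followed by a newline.
echo "Hello"
# The -n option tells echo to omit the trailing newline, so the next
# prompt would appear immediately after the text.
echo -n "Hello"
```

Here `"Hello"` is the argument (the data echo operates on), while `-n` is an option (a knob that changes how echo behaves).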
Pipelines
We saw in the previous section that, by default, commands (generally) print their output in the terminal window. This section introduces the concept of "pipelines", where the output from one command is passed as input to the next command, in order to perform more complex operations. This concept can be visualized using the following diagram, which shows a pipeline made up of 2 commands.

This pipeline can be executed from the command line as follows:

{command 1} | {command 2}

where the "|" symbol is called a "pipe". This pipeline can be extended to n commands by "piping" the output to additional commands, using:

{command 1} | {command 2} | {command 3} | ... | {command n}

Example Pipeline

As a simple example, suppose we have a file containing a list of color names, and we want to generate a sorted list of 5 randomly-selected color names from that list. This can be achieved by combining the following commands:

1. Execute the shuf command to read colors.txt from the file system, shuffle the lines, then pass the result to stdout, then
2. Pipe that result to the head command to limit the result to 5 lines:

ninja$: shuf colors.txt | head -n 5
Bisque
DarkGreen
SandyBrown
Wheat
Aquamarine
ninja$:

3. Next, pipe that result to the sort command to sort the lines:

ninja$: shuf colors.txt | head -n 5 | sort
Aquamarine
Bisque
DarkGreen
SandyBrown
Wheat
ninja$:

which achieves our original goal.
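As another sketch of the same idea, a pipeline can chain several small commands to answer a question none of them answers alone, here, counting the unique words in a sentence:

```shell
# tr replaces each space with a newline (one word per line), sort groups
# duplicate words together, uniq collapses the duplicates, and wc -l
# counts the lines that remain.
echo "the quick brown fox jumps over the lazy dog" | tr ' ' '\n' | sort | uniq | wc -l
```

Since "the" appears twice among the nine words, this prints 8.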
Lua Comments
Before we get into Lua itself, let's first talk about comments. Comments are text annotations in code that are ignored when the script is executed, but help people reading the code understand it. We use comments throughout our examples to explain what is happening at different points in the code, as well as to show the values of variables and other important information. Because comments are used to help document this behavior, let's start by explaining what they are and how to read them. There are two types of comments in Lua:

Line Comments

Line comments are delimited with two dashes, as in --. When Lua encounters this delimiter, it ignores any content to the right of the delimiter. For example:

-- This entire line is a comment
print(123) -- The print statement executes, then this is ignored

We will often use comments in examples to either:

1. Explain the operations performed on a line, or
2. Show the output or value of a variable on a line

Block Comments

Block comments are similar to line comments, except they apply to entire blocks of code. When there is an entire block of code to comment out, there are two choices. First, every line in the block can be commented with a line comment:

-- local a = 123
--
-- print(a)
--
-- a = 456
--
-- print(a)

which is perfectly valid, though often less convenient than using a block comment:

--[[
local a = 123
print(a)
a = 456
print(a)
]]

In the second example, the block comment starts with --, which is followed by double square brackets [[ which define the start of the block comment. Everything between the start of the block and the end of the block, defined by the closing double square brackets ]], is considered a comment and ignored.
Redirection
By default, information flows through the standard streams, but it can be directed to other locations. For example, we already saw that pipelines can be used to direct the stdout of one command to the stdin of another. Redirection provides a mechanism through which standard streams can be combined and/or sent to another location.

For example, by default stdout is generally connected to the terminal, so that command output is displayed in the terminal. Suppose you want to save the command output to a file. One option would be to copy and paste it from the screen, but this is manual and error-prone. In this case, it makes more sense to redirect the command's output to a file.

Redirecting stdout

Redirecting stdout can be achieved using the following call signature:

{command} > {filename}

where command is the command to execute, > indicates that stdout should be redirected, and filename is the file to direct the output to. Let's see this in action. First, repeat the simple "Hello World!" example and redirect the output to output.txt:

ninja$: echo Hello World! > output.txt
ninja$:

Note that unlike the original example, the "Hello World!" text is not displayed in the terminal. Let's now cat the output file to see the content:

ninja$: cat output.txt
Hello World!
ninja$:

Note that if filename already exists, it will be overwritten, which may not be what is desired. As an alternative, you can append output to the specified file using the following call signature:

{command} >> {filename}

Let's execute that command:

ninja$: echo Hello Again! >> output.txt
ninja$:

then see the result:

ninja$: cat output.txt
Hello World!
Hello Again!
ninja$:

Redirecting stdin

In a similar way, we can also redirect content to a command's stdin using the call signature:

{command} < {filename}

which is similar to the previous case, except the < indicates that stdin should be redirected.
As expected, executing this reads the lines from output.txt, sorts them, and prints the result to the terminal:

ninja$: sort < output.txt
Hello Again!
Hello World!
ninja$:

Note that there are two additional variations of stdin redirection, which support here documents and here strings, but these will not be discussed here.

Multiple Redirection

Both stdin and stdout can be redirected by combining these two features into a single call, using the call signature:

{command} < {input file} > {output file}

As an example, let's now sort our file in reverse and redirect the result into a new file, reversed.txt:

ninja$: sort --reverse < output.txt > reversed.txt
ninja$:

Now we can display the new file contents:

ninja$: cat reversed.txt
Hello World!
Hello Again!
ninja$:

which confirms that the command executed as expected.

Now that we have learned the basics of working with the command line, let's next look at how to combine multiple commands into shell scripts, which we can execute like a single, customized command.
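As an aside before moving on: the numeric stream identifiers from the Streams section can also be combined with redirection. Prefixing > with 2 redirects stderr instead of stdout. A brief sketch (the path and file name here are just examples):

```shell
# `2>` works like `>`, but applies to the error stream (stream 2) instead
# of the output stream. ls prints nothing here; its diagnostic message is
# captured in errors.txt instead of appearing on the terminal.
ls /nonexistent-path 2> errors.txt
cat errors.txt   # display the captured message
```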
What is the Command Line?
As described in the previous section, the concepts of the terminal, a terminal emulator, the shell, etc. have all become interchangeable with the "command-line" in many contexts. The command line is the text interface to your computer, named after (well...) the line on the screen where the user can execute commands. To demonstrate, let's see how to execute the ubiquitous "Hello World!" example:

ninja$: echo Hello World!
Hello World!
ninja$:

In this example, we executed the echo command to pass the string "Hello World!" to stdout. More broadly, the command-line is the central component of the Command-line Interface (CLI) that creates the user interface in many terminal applications. Next, let's take a look at some of the more common commands.