Shell Scripts
The ability to execute commands from the command line provides a great deal of power and efficiency, but executing common commands can get repetitive. In these cases, shell scripts allow groups of commands to be executed together with a single command.

As a very simple example of how to execute shell scripts, let's repeat the "Hello World" example from the previous section. First, create a file named "myscript.sh" with the following contents:

    #!/usr/bin/bash
    echo Hello World!

The first line of this file is called the shebang, which starts with #! followed by the path to the interpreter that should execute this script, /usr/bin/bash.

Now, you can execute this script using the bash command, by executing bash myscript.sh as follows:

    ninja$: bash myscript.sh
    Hello World!
    ninja$:

In this example the bash command is used to execute the commands contained in myscript.sh.

Shell scripts can also be executed directly by calling the full path to the script. If the script is located in the current working directory, the script name can be prefixed with ./, where . is a shortcut (alias) for the current working directory. Let's try:

    ninja$: ./myscript.sh
    bash: ./myscript.sh: Permission denied
    ninja$:

Whoa, what happened? Executing files can present security risks, so permission must be explicitly granted before a script can be executed. This is accomplished using the chmod command as follows:

    ninja$: chmod u+x myscript.sh
    ninja$:

The name chmod is short for "change mode". In this case we want to add execute (x) permission for the current user (u), which is stated as "u+x". Finally, we state the name of the file to apply this change to. Now, let's try executing the script again:

    ninja$: ./myscript.sh
    Hello World!
    ninja$:

With that, we have enough background information to start learning the fundamentals of working with the command-line in the following sections.
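Before moving on, one optional extension: scripts become more useful once they accept arguments. The sketch below is not part of the original example; it assumes a hypothetical file named greet.sh and uses the positional parameter $1, which Bash fills with the first argument passed to the script:

    #!/usr/bin/bash
    # greet.sh (a hypothetical name): greet whoever is named on the command line,
    # falling back to "World" if no argument is given
    name="${1:-World}"
    echo "Hello ${name}!"

After granting permission with chmod u+x greet.sh, running ./greet.sh Ninja should print "Hello Ninja!", while running ./greet.sh with no argument falls back to "Hello World!".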
What is the Shell?
As described in the previous section, when a terminal is opened it runs software that provides access to the computer's operating system. This software has become known as "the shell". Let's take a quick look at how that name came to be used.

At a high level, operating systems are designed with two distinct parts:

- A first part which contains the low-level software that has complete control of system resources.
- A second part which "surrounds" the first to prevent access to it, except for specific, pre-defined actions that application software is allowed to take.

This concept has been visualized as a nut contained within its shell, where the meat of the nut (the "kernel") contains the low-level software, and the hard shell that surrounds it prevents outside access except for specific, pre-defined "system calls", such as those defined by the Linux API.

With this in mind, when users open a terminal they are presented with an application that accepts commands typed in the terminal, executes them, and returns the output to the terminal interface. This application provides users with an interface to make system calls, call other applications, and execute scripts that define higher-level interactions with the operating system. Since these "shell applications" allow users to interact directly with the shell, they have become synonymous with "the shell".

On Linux systems, the most common shell application is Bash, which is often installed by default, though other popular shell applications include ksh, zsh, and fish. Since Bash is the most common shell we will use it for examples, but most shells feature backwards-compatibility with Bash, so most examples should work across other shells.

As terminals evolved from hardware to software, the boundaries between the terminal and the shell became increasingly blurred, to the point where the concepts of "the terminal", a "terminal emulator", "the console", and "the shell" have become interchangeable in many conversations, which has led to confusion about what each term actually refers to. Possibly the most common term for these concepts among users is "the command-line", which is the topic of the next section.
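Incidentally, if you want to check which shell a given terminal is running, here is a minimal sketch using two standard commands; the exact output varies from system to system:

    # print the default login shell recorded for the current user
    echo "$SHELL"

    # show the name of the process actually running this terminal session
    ps -p $$ -o comm=

On a typical Linux system both will point at bash, but on other systems they may report zsh, fish, or another shell.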
Pipelines
We saw in the previous section that by default commands (generally) print their output in the terminal window. This section introduces the concept of "pipelines", where the output from one command is passed as input to the next command, in order to perform more complex operations. This concept can be visualized as a diagram showing a pipeline made up of 2 commands, which can be executed from the command line as follows:

    {command 1} | {command 2}

where the "|" symbol is called a "pipe". This pipeline can be extended to n commands by "piping" the output to additional commands, using:

    {command 1} | {command 2} | {command 3} | ... | {command n}

Example Pipeline

As a simple example, suppose we have a file containing a list of color names, and we want to generate a sorted list of 5 randomly-selected color names from that list. This can be achieved by combining the following commands.

First, execute the shuf command to read colors.txt from the file system, shuffle the lines, and pass the result to stdout, then pipe that result to the head command to limit the result to 5 lines:

    ninja$: shuf colors.txt | head -n 5
    MistyRose
    Blue
    OldLace
    MidnightBlue
    MintCream
    ninja$:

Next, pipe that result to the sort command to sort the lines:

    ninja$: shuf colors.txt | head -n 5 | sort
    Blue
    MidnightBlue
    MintCream
    MistyRose
    OldLace
    ninja$:

which achieves our original goal.
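As a further illustration of extending a pipeline with additional stages, here is a short sketch that assumes the same colors.txt file and adds the nl command, which simply numbers the lines it receives:

    # shuffle the color names, keep 5, sort them, then number the sorted lines
    shuf colors.txt | head -n 5 | sort | nl

Each command only sees the lines handed to it by the previous stage, which is exactly what makes pipelines easy to grow one stage at a time.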
Streams
We have seen how to execute commands, and how to build pipelines that pass information between commands to perform more complex tasks. In this section, we will touch on how this information is passed between commands, using "streams".

A stream refers to the information that flows through the pipeline, from command to command. A command receives information from the "input stream", processes it, then passes the result to the "output stream" (or possibly sends error information to the "error stream", if needed).

To help solidify the concept of a stream, suppose there is a command that reads text from the input, modifies it in some way, then passes the result to the output. One might expect the process of executing that command to be:

1. Read the input into memory
2. Modify the content in memory
3. Pass the modified content to the output

However, consider two cases:

- Suppose that the input comes from a file that requires more memory than is available in the system. In this case it would be impossible to load the entire file into memory, and the command would fail.
- Suppose that the input comes from a continuous source, such as a temperature sensor that reads at regular intervals. Since the temperature is constantly being monitored, there is no such concept as "the end of the input".

In order to handle these situations effectively, the process is a bit more like this:

1. Open the input source
2. Read a line of data
3. If the end of the input stream is detected, stop processing and close the stream
4. Perform some operation on that line and pass the result to the output
5. Repeat from step 2

With this in mind, once the input stream is opened and a line is read, the command doesn't know how much data is contained in the input; it simply reads a line, processes it and passes the result to the output, then reads another until (possibly) all input data has been consumed. Similarly, the next command in the pipeline simply receives each line of data, processes it, then passes the result to its output. This flow of lines of data led this concept to be called a "stream".

Standard Streams

When any of input, output, or error are not specified, they each default to a specific "standard" stream. Under the hood, each stream is implemented as a file that can be read from or written to. There are three standard streams:

Standard Input

Standard input, called stdin and sometimes referenced numerically as "0", is the stream from which a program reads its input data, if not otherwise specified. Not all commands require an input stream. For example, the ls command, which displays information about files contained in a directory, reads input from the filesystem without any input data stream.

Standard Output

Standard output, called stdout and sometimes referenced numerically as "1", is the stream to which a program writes its output data. By default this is usually connected to the terminal, so that the results of a command are printed to the screen. Not all commands generate output. For example, the mv command, which renames a file on the filesystem, does not generate any output when it is successfully invoked.

Standard Error

Standard error, called stderr and sometimes referenced numerically as "2", is an alternative output stream that is used by commands for error or diagnostic information. The main purpose of stderr is to allow a command to generate diagnostic feedback without polluting the output stream.
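To make the line-by-line model concrete, here is a minimal Bash sketch of a command-like loop that follows the five steps above; the uppercase transformation is just a placeholder operation chosen for illustration:

    # read one line at a time from the input stream, transform it,
    # and pass the result to the output stream; the loop stops when
    # the end of the input is detected (the stream is closed)
    while read -r line; do
        echo "${line^^}"   # placeholder operation: uppercase the line
    done

If this loop were saved in a script (say upper.sh, a hypothetical name), then shuf colors.txt | bash upper.sh would uppercase each color name as it arrives, without the loop ever knowing how many lines will follow.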
Note

This section is intended to provide just enough information to allow users to begin understanding and using streams in many common applications. If you want to dig a bit deeper into streams and how they work, a good place to start is Everything is a File.

One of the useful features of the standard streams is that they can be replaced by other streams, combined, or sent to other locations in a process called redirection, which is the topic of the next section.
Commands
Overview

At a high level, command-line commands represent a piece of code that:

1. Optionally receives input data, then
2. Optionally performs an operation on that data, then
3. Passes the (possibly modified) data to the output, and
4. Optionally generates an error message.

This can be pictured as a diagram in which the "natural" flow goes from the input, to the command, then to the output. If an error occurs, then information about that error is sent to the error stream. With this diagram in mind, let's proceed to see some examples that will help solidify this concept.

Executing Commands

Commands are executed from the command-line by typing the name of the command, possibly followed by options and/or arguments. Arguments provide the command with information that it needs to perform its task. For example, a command that operates on a file might be passed the name of the file to perform those operations on. Options provide "knobs" that allow the user to customize how the command behaves. The combination of command, arguments, and options can be combined into a call signature, which describes the format to be used when calling the command.

As a simple example, the echo command is often called with a single string as an argument:

    ninja$: echo Hello World!
    Hello World!
    ninja$:

which takes the argument (Hello World!) and passes it to the output which, by default, prints the output to the terminal screen. In the next section we will see how to use this output as the input for other commands.
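Before that, here is a small, hedged illustration of options versus arguments using two widely available commands; the directory name is made up for the example, and the exact output of ls depends on what the directory contains:

    # "-l" is an option: it asks ls for its long listing format
    # "docs" is an argument: the (hypothetical) directory to list
    ls -l docs

    # "-n" is an option that tells echo not to print a trailing newline
    echo -n "Hello World!"

In both cases the argument names the thing to operate on, while the option adjusts how the command goes about it.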
Redirection
By default, information flows through the standard streams, but can be directed to other locations. For example, we already saw that pipelines can be used to direct the stdout of one command to the stdin of another. Redirection provides a mechanism through which standard streams can be combined and/or sent to another location.

For example, by default stdout is generally connected to the terminal, so that command output is displayed in the terminal. Suppose you want to save the command output to a file. One option would be to copy and paste it from the screen, but this is manual and error-prone. In this case, it makes more sense to redirect the command's output to a file.

Redirecting stdout

Redirecting stdout can be achieved using the following call signature:

    {command} > {filename}

where command is the command to execute, > indicates that stdout should be redirected, and filename is the file to direct the output to. Let's see this in action. First, repeat the simple "Hello World!" example and redirect the output to output.txt:

    ninja$: echo Hello World! > output.txt
    ninja$:

Note that unlike the original example the "Hello World!" text is not displayed in the terminal. Let's now cat the output file to see the content:

    ninja$: cat output.txt
    Hello World!
    ninja$:

Note that if filename already exists, it will be overwritten, which may not be what is desired. As an alternative, you can append output to the specified file using the following call signature:

    {command} >> {filename}

Let's execute that command:

    ninja$: echo Hello Again! >> output.txt
    ninja$:

then see the result:

    ninja$: cat output.txt
    Hello World!
    Hello Again!
    ninja$:

Redirecting stdin

In a similar way, we can also redirect content to a command's stdin using the call signature:

    {command} < {filename}

which is similar to the previous case, except the < indicates that stdin should be redirected. As expected, executing this reads the lines from output.txt, sorts them, and prints the result to the terminal:

    ninja$: sort < output.txt
    Hello Again!
    Hello World!
    ninja$:

Note that there are two additional variations of stdin redirection, which support here documents and here strings, but they will not be discussed here.

Multiple Redirection

Both stdin and stdout can be redirected by combining these two features into a single call, using the call signature:

    {command} < {input file} > {output file}

As an example, let's now sort our file in reverse and redirect the result into a new file, reversed.txt:

    ninja$: sort --reverse < output.txt > reversed.txt
    ninja$:

Now we can display the new file contents:

    ninja$: cat reversed.txt
    Hello World!
    Hello Again!
    ninja$:

which confirms that the command executed as expected.

Now that we have learned the basics of working with the command line, let's next look at how to combine multiple commands into shell scripts, which we can execute like a single, customized command.
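One brief aside before moving on: stderr (stream "2" from the streams section) can be redirected in the same way. The sketch below is not part of the original walkthrough; it assumes a file named missing.txt that does not exist, so ls writes its complaint to stderr rather than stdout:

    # "2>" redirects only the error stream into errors.txt, leaving stdout alone
    ls missing.txt 2> errors.txt

    # the diagnostic text now lives in errors.txt instead of appearing on screen
    cat errors.txt

This keeps error messages out of the data you are piping or saving, which is exactly what stderr is for.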
What is the Command Line?
As described in the previous section, the concepts of the terminal, a terminal emulator, the shell, etc. have all become interchangeable with the "command-line" in many contexts. The command line is the text interface to your computer, named after (well...) the line on the screen where the user can execute commands.

To demonstrate, let's see how to execute the ubiquitous "Hello World!" example:

    ninja$: echo Hello World!
    Hello World!
    ninja$:

In this example, we executed the echo command to pass the string "Hello World!" to stdout. More broadly, the command-line is the central component of the Command-Line Interface (CLI) that creates the user interface in many terminal applications. Next, let's take a look at some of the more common commands.
Getting Help in Neovim
Neovim includes an extensive help system, which provides a significant amount of detail about virtually any vim-related topic. To open the help system, from Normal mode enter the command:

    :help

which splits the current window and displays a buffer containing the main help page. We will review splits shortly in the windows chapter. Navigate the help window as you would any buffer, for example using j, k, etc.

Help for a specific topic

The help contents for a specific topic or command can be displayed by adding the topic or command when invoking the help system. For example, to review the documentation for the help system itself:

    :help help

Following links

Help contents often contain links to other help topics. To review the linked content, move the cursor over the link and type C-].

Changing topics

Once in the help system you can manually jump to other topics using the tag command:

    :tag [topic]

where topic refers to the help topic you want to jump to.

Returning to topics

Return to the previous help topic by invoking C-T.

Exiting the help window

Exit the help window and return to the original window by typing C-W c or :quit.
Variable Scope in Lua
We learned in the previous section that variables consist of both a name and a value, and that after a variable has been declared, it is possible to "re-assign" values to the name. The important detail here is that each variable requires a unique name. Let's now take a look at an important aspect of variables, scope.

Global Scope

In the early days of programming, all variables were what we now call global variables, or variables that have (or exist in the) "global scope". The global scope can be thought of as a single bucket of uniquely-named variables that are accessible from anywhere. Programmers quickly discovered that this created a variety of problems, ranging from simple annoyances such as the need to define (and keep track of) many similarly-named variables to avoid "name collisions", to accidentally-created and difficult-to-find bugs related to different parts of the code operating on the same variables.

Although global variables are generally discouraged today, they are supported by most languages, including Lua. In fact, variables created in Lua are global by default. While global variables might seem convenient, they lead to code that can be difficult to understand, debug, and maintain, which led to the concept of scope and local scopes.

Local Scope

While we have described the global scope as a big bucket of variables, local variables can be thought of as smaller buckets of variables that are placed into that big global bucket. Local variables have their scope limited by the "location" in which they are defined:

- Local variables defined in a file have their scope limited to that file. This allows the same variable names to be used in different files without risk of collision.
- Local variables defined within a function are local to that function, which allows functions to be easily used across different files.
- Within a file or function, local variables defined within a block (such as a control structure) are local to that block.

Let's take a look at a few examples:

    local x = "file scope"
    print(x) -- file scope

    -- start an "if-then" block
    if true then
      -- this block has its own scope
      local x = "block scope"
      print(x) -- block scope
    end

    -- back in the file scope
    print(x) -- file scope

    local fn = function()
      -- this function has its own scope
      local x = "function scope"
      return x
    end

    -- although we are back in the file scope,
    -- this prints the function's return value
    print(fn()) -- function scope

    -- this prints x from the file scope
    print(x) -- file scope
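To illustrate the earlier point that Lua variables are global by default, here is a minimal sketch; the names are made up for the example:

    -- assigning to a name without "local" creates (or reuses) a global variable
    count = 1

    local function bump()
      -- no "local" here either, so this reads and writes the same global "count"
      count = count + 1
    end

    bump()
    print(count) -- 2

Because both assignments touch the same global name, any other file or function that uses "count" would see (and could change) this value too, which is exactly the kind of hidden coupling that local scopes are designed to prevent.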
Lua's String Type
The next type we will look at is the string. We will review strings in more detail in the strings chapter, but as a brief introduction, strings are arrays of bytes, where each byte or group of bytes represents a character, and these characters are grouped together to form words or other text-based sequences.

Lua strings are immutable, meaning that they cannot be modified after they are created. Changing a string requires creating a new string that consists of the characters of the previous string, plus whatever changes are desired. We will learn more about this when we look at string buffers and format strings.

Lua strings are defined by sequences of characters contained in:

- Double quotes
- Single quotes
- Double square brackets

The characters of strings defined with either single or double quotes must exist on a single line, while those in strings defined with brackets can be on multiple lines. Strings defined with single and double quotes are equivalent, and both types of quotes are supported so that quote characters can be included in strings themselves.
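As a short illustration of the three forms described above (the example strings themselves are made up):

    -- double quotes make it easy to include a single quote in the string
    local a = "it's a string"

    -- single quotes make it easy to include double quotes
    local b = 'she said "hello"'

    -- double square brackets allow the string to span multiple lines
    local c = [[
    line one
    line two]]

    print(a) -- it's a string
    print(b) -- she said "hello"
    print(c) -- prints both lines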