I have a personal interest in automation, mostly because I believe that we should offload everything we possibly can onto computing systems, so that we can spend our time thinking about interesting problems rather than manually repeating the same tasks over and over again. Good programmers, coders, and software engineers (depending on how you identify) should be trying to automate as much of their jobs as possible, for all kinds of reasons: to help mitigate the bus factor, to get rid of the boring and repetitive parts of their jobs, and even to better document internal and external processes. If a process has been put into code - especially well-commented code - that code is itself a form of documentation about the process, and it can then be tracked and versioned in a version control system.
The Programming Loop
Why am I talking about automation when the title of this post is about filesystem events? Before we talk about those kinds of events, we need to take a look at a simplified but typical example of the programming loop when writing code. Such a loop might look something like the following:
- Make a change to some code
- Check the editor or IDE for feedback about the code you just wrote
- Run tools that check for programming errors, outright bugs, and stylistic errors
- Run the project’s tests (the project has tests, right?)
- Review tool and test feedback, and resolve issues
- Return to Step 1 and repeat
That’s a lot to do every time the code changes, but the tools are there to help us write better code and to catch the little or subtle issues that computers are better at noticing than people are. To take advantage of all of these tools and their output, we want to make the loop from writing some code to reviewing feedback as quick as possible, so that the programmer doesn’t end up waiting around for feedback, getting annoyed at how long the process takes, and eventually abandoning the tools. Some projects also use a lot of different tools, and it can be a lot to remember how to run each tool individually and which command line arguments each one requires for the current project.
Instead of having to remember all this information, it would be great to have to remember fewer commands to get feedback from all the tools. Automation to the rescue! Instead of running each tool separately ourselves, we can use a tool that runs other tools; in this case we want to use a task runner.
make is a well-known tool for automating this process, and can be used for many different kinds of projects:
C programs […] are most common, but you can use make with any programming language whose compiler can be run with a shell command. In fact, make is not limited to programs. You can use it to describe any task where some files must be updated automatically from others whenever the others change.
Using a tool like make can simplify the loop to something like the following:
- Make a change to some code
- Run make
- Review tool and test feedback from each tool that make runs, and resolve issues
- Return to Step 1 and repeat
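As a rough sketch, a Makefile for this kind of workflow might look like the following. The check, lint, and test target names and the tool invocations are placeholders of my own choosing, not prescriptions; substitute whatever linters and test commands the project actually uses.

```make
# Hypothetical "check" target that runs all of the project's tools
# with a single command. The tool invocations below are placeholders.
.PHONY: check lint test

check: lint test

lint:
	eslint .

test:
	npm test
```

Now the whole feedback loop is behind one command: make check.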
This significantly reduces the steps the programmer has to take every time they make a change, as well as the number of commands they need to remember: all they have to do is run make! But do we really need to remember to run a command at all? Is there some way that we can have make, or something like it, run whenever we make a change to the code?
Enter: Filesystem Events
Filesystem events are a great way to trigger other tasks. Other kinds of task runners such as grunt and gulp in NodeJS, as well as guard in Ruby, let the programmer set up a process to “watch” individual files or entire directories for changes. When a file changes, these task runners pick up the change and perform whatever tasks the programmer has specified.
Under the hood, these watchers respond to changes in the filesystem and communicate data about the detected changes back to their listener functions.
A Simple Example
The following is an example of how to use the fs.watch() function in NodeJS. This is the recommended function to use because it supports directories as well as single files, where fs.watchFile() only watches individual files, and it is more efficient as well. The fs.watch() function does come with some caveats due to the state of this kind of filesystem event handling on various platforms; specifically, the API is not 100% consistent across platforms, and some options are not universally available due to the underlying implementations. Because fs.watch() is event-driven, there is no matching function in the experimental fs Promises API introduced in Node 10.0.0, and there is no synchronous equivalent like there is for many other functions in the fs module, because the task is inherently asynchronous.
However, this is still really useful functionality when working with files and filesystems, and is of particular interest in our pursuit of efficient automation.
One Last Twist
After looking a bit more into how grunt and gulp actually implement their file-watching functionality, it turns out that neither of these popular task runners uses the fs.watch() functions directly, and they don’t even use the same dependency to provide their watching functionality!
Chokidar goes into some detail as to why fs.watch() is insufficient, and gaze lists a number of alternative projects, so there are clearly some community opinions on the quality of the built-in functions.
Some notable functionality gaps:
- no wildcard support (e.g. you cannot watch only for files that end with a particular extension)
- no support for recursive watching on platforms other than Windows or macOS (see the fs.watch() caveats)
- eventType is only one of 'rename' or 'change', with no support for more granular file events (a deletion, for example, is reported as a 'rename')
As in everything, there are trade-offs when implementing functionality. For simple use cases, fs.watch() may be sufficient and will not require including additional project dependencies. However, if file watching is a core piece of functionality, it might be worth evaluating some of the other options available on npm to find one with the necessary functionality for the project at hand, while also keeping you from tearing your hair out tracking down odd edge cases.