jGetFile is geared toward mass downloading of non-HTML files from the web. General-purpose web crawlers and site downloaders are widely available and work very well, but not all web-based file crawlers are equal. Few of them handle this href scenario:
<a href="http://www.foo.com?url=http://youreallywantthislink.com/files"></a>

jGetFile was engineered to handle as many extraneous href configurations as possible. It does not yet handle every case, but more will be supported in future releases.
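The nested-URL case above can be handled by pulling an absolute URL out of a query-string parameter value. The sketch below is not jGetFile's actual implementation, just a minimal illustration of the idea; the class and method names are made up for this example:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HrefExtractor {
    // Matches an absolute URL embedded as a query-string parameter value,
    // e.g. the "url=http://..." part of the href above.
    private static final Pattern EMBEDDED_URL =
        Pattern.compile("[?&][^=]+=(https?://\\S+)");

    /** Returns the embedded URL if one is present, otherwise the href itself. */
    public static String resolve(String href) {
        Matcher m = EMBEDDED_URL.matcher(href);
        return m.find() ? m.group(1) : href;
    }
}
```

A real crawler would also need to URL-decode the parameter value and handle multiple embedded parameters, which this sketch skips.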
jGetFile provides a highly configurable way to filter which links the program accepts. With the -i and -e options, a user can restrict traversal to links that start with the specified addresses, or exclude links that start with them.
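The prefix-based include/exclude logic can be sketched as a small predicate. The exact precedence jGetFile applies between -i and -e is not documented here, so this example assumes a common convention: exclusions win, and an empty include list means "accept everything":

```java
import java.util.List;

public class LinkFilter {
    private final List<String> includePrefixes; // from -i; empty = no restriction (assumed)
    private final List<String> excludePrefixes; // from -e

    public LinkFilter(List<String> includePrefixes, List<String> excludePrefixes) {
        this.includePrefixes = includePrefixes;
        this.excludePrefixes = excludePrefixes;
    }

    /** A link is traversed only if it matches no exclude prefix and,
     *  when include prefixes are given, matches at least one of them. */
    public boolean accept(String link) {
        if (excludePrefixes.stream().anyMatch(link::startsWith)) {
            return false;
        }
        return includePrefixes.isEmpty()
            || includePrefixes.stream().anyMatch(link::startsWith);
    }
}
```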
Alternatively, power users can supply a BeanShell script through the -als option. This allows arbitrarily complex link-acceptance rules, such as: accept only links at depth 1 that begin with www.blah.com, exclude links at depth 2 that begin with www.foo.com, and exclude links at depth 2 that contain the word cat. The depth variable is not yet available to custom scripts, but will be in the next release.
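Since BeanShell uses Java syntax, the kind of rule such a script would express can be shown as a plain Java predicate. The variable names and return convention of jGetFile's actual script interface are not documented here, so this is only an illustration of the logic; because the depth variable is not yet available, the rule uses only the link string:

```java
public class AcceptLinkRule {
    /** Illustrative acceptance rule: take links that begin with the
     *  www.blah.com prefix and do not contain the word "cat". */
    public static boolean acceptLink(String link) {
        return link.startsWith("http://www.blah.com")
            && !link.contains("cat");
    }
}
```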
There is no GUI in the works, and given the intended simplicity of this program, none is planned. jGetFile is not meant to replace wget, although its initial feature set was modeled on wget's. jGetFile has a single goal: downloading files from the web quickly and efficiently.