SPACED OUT SCRIPTING

Sometimes I get a bit too over-enthusiastic at the shell. 
Recursively replacing spaces with underscores in directory and file 
names seemed to me like a quick one-liner. I wasn't too far off, 
but I did spend quite a while tweaking things before it worked 
properly. Then the next day I noticed a script for this exact task 
that I'd collected in ~/bin, downloaded at some point from here:
https://github.com/ArthurGareginyan/space_to_underscore/

I do get a bit frustrated with myself when I do this, especially 
when what I come up with turns out very similar to the existing 
solution online. This time though it's interesting how different my 
approach turned out to be. I ended up with this two-liner (of 
course you could join them, but they make more sense separately):

                             ----
#!/bin/sh
# Convert spaces to underscores in directory and file names 
# recursively.
#  The Free Thinker, 2022.
# Will spit out (ignorable) errors for unspaced file/directory 
# names.

find -type d | while read dir; do mv "`echo \"${dir%/*}\" | \
tr ' ' _`/${dir##*/}" "`echo \"${dir%/}\" | tr ' ' _`"; done
find -type f | while read file; do mv "$file" "`echo \"$file\" | \
tr ' ' _`"; done
                             ----

The unwanted errors could be filtered out by piping stderr from the 
mv commands through 'grep -v', but I didn't really care.
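A rough sketch of that filtering, assuming GNU mv's "are the same 
file" error wording (other implementations may word it differently):

                             ----
find -type f | while read file; do mv "$file" "`echo \"$file\" | \
tr ' ' _`" 2>&1 | grep -v 'are the same file' >&2; done
                             ----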

The equivalent bit of Arthur Gareginyan's script:

                             ----
################### SETUP VARIABLES #######################
number=0                    # Number of renamed.
number_not=0                # Number of not renamed.
IFS=$'\n'
array=( `find ./ -type d` ) # Find catalogs recursively.


######################## GO ###############################
# Reverse cycle.
for (( i = ${#array[@]}; i; ));
do
    # Go in to catalog.
    pushd "${array[--i]}" >/dev/null 2>&1
    # Search of all files in the current directory.
    for name in *
    do
        # Check for spaces in names of files and directories.
        echo "$name" | grep -q " "
        if [ $? -eq 0 ]
            then
                # Replacing spaces with underscores.
                newname=`echo $name | sed -e "s/ /_/g"`
                if [ -e $newname ]
                    then
                        let "number_not +=1"
                        echo " Not renaming: $name"
                    else
                        # Plus one to number.
                        let "number += 1"
                        # Message about rename.
                        echo "$number Renaming: $name"
                        # Rename.
                        mv "$name" "$newname"
                fi
        fi
    done
    # Go back.
    popd >/dev/null 2>&1
done
                             ----

He puts the list of directories into a Bash array instead of piping 
them through 'read', then moves into each directory before 
processing file and directory names at the same time instead of 
doing them separately. He's also neater in that he bothers to check 
whether a name actually needs changing rather than just letting 
'mv' error out when source and destination are the same. I'm 
tempted to think that my version would be quicker, but am too lazy 
to test it. I don't like his use of Sed: the "-e" is unnecessary, 
and "s/ /_/g" could just be "y/ /_/" (plus I use 'tr' for that 
anyway), but that's nitpicking.
I'd most likely have been perfectly happy with his script if I 
hadn't messed about writing my own, and it is more user-friendly.
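
To illustrate the nitpick, the three commands below all print 
"some_file_name":

                             ----
echo "some file name" | sed -e "s/ /_/g"  # what his script does
echo "some file name" | sed "y/ /_/"      # transliterate instead
echo "some file name" | tr ' ' _          # the same thing with tr
                             ----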


Another not-quite-a-oneliner adventure that I embarked upon 
recently was with the aim of ticking off one of the last remaining 
TO-DOs for my Internet Client system, remote video conversion. This 
is again a case where formats have evolved, requiring newer 
software to go with them, and indeed for years I've already been 
converting videos from the web into a format suitable for my old TV 
media player rig based on a hacked video game console and some 
long-forgotten free software. Now the outdated ffmpeg on my old 
laptop (because I _still_ haven't switched over to the 'new' 
Thinkpad T60 for most things) is getting buggy with some videos 
found on the web, so I want to use the up-to-date version on my 
Internet Client. The Intel Atom CPU in the Atomic Pi SBC is also 
about five times as fast at doing the conversion as my poor old 
Thinkpad with its pre-DDR RAM (more of a bottleneck than the 1GHz 
Pentium III CPU I think, comparing with other 1GHz PCs), so that's 
nice too, though I'm rarely in a hurry.

I wanted to avoid temporary files because I've only got flash 
storage on the Internet Client and I don't want to be wearing it 
out, nor is its 2GB RAM really enough for comfortably putting big 
video files in tmpfs. The first answer was just to pipe the data 
in/out, with the pipes sent over the network by rexec.
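The general shape was something like this (the host name, container 
and codec options here are invented, and the exact rexec invocation 
depends on the client):

                             ----
rexec atomicpi "ffmpeg -i - -c:v mpeg4 -c:a mp2 -f avi -" \
    < input.mkv > output.avi
                             ----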

Problem a) ffmpeg supports piped input/output, but doesn't put a 
valid duration in the output video file's metadata because it wants 
to work that out at the end of the conversion when it knows for 
sure how long the video _really_ goes for (there's no way to make 
it just trust the input file's duration metadata, apparently). 
This means you can't seek when playing the converted video file, 
and worse, the files wouldn't play on my 'media player' at all! 
Problem b) 
Something funny goes on with rexec and the output video ends up 
corrupted when it's sent through it in a pipe. I'm not sure what's 
going on there, but problem a) was bad enough that it wasn't worth 
looking into it.

So I realised I needed a seekable output method, and initially gave 
up on streaming input/output in favour of NFS, which I'm 
already using heavily with my Internet Client for various other 
things. To avoid waiting for the file to finish copying over before 
starting the conversion I wrote a script that waited until a 
certain chunk had arrived and then started ffmpeg on the basis that 
it would never catch up from there. Over Ethernet this worked a 
treat, but I do the conversions with my laptop plugged into the 
USB-adapted HDD at the TV (OK it's old and inelegant, but I 
download and convert videos to watch in batches so it works for me) 
where it needs to connect over WiFi. It turns out the kernel 
mysteriously sends the data in intermittent bursts over WiFi, so 
ffmpeg runs out of data and fails.
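
The waiting logic itself was roughly along these lines (the path 
and the size threshold are made up for illustration):

                             ----
#!/bin/sh
# Wait until enough of the incoming file exists on the NFS mount,
# then start converting and hope ffmpeg never catches up.
IN=/mnt/laptop/videos/input.mp4
THRESHOLD=104857600   # ~100MB head start (arbitrary)
SIZE=0
while [ "$SIZE" -lt "$THRESHOLD" ]
do
    sleep 5
    SIZE=`stat -c %s "$IN" 2>/dev/null || echo 0`
done
ffmpeg -i "$IN" /mnt/laptop/videos/output.avi
                             ----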

Tried the Rsync protocol - it only creates the destination file 
under its final name once the transfer has finished, so that 
failed too. Then I remembered reading 
someone mention playing http URLs directly with ffplay, so I tried 
running a web server on the laptop and serving the input files over 
that. That worked, but I immediately abandoned it because while 
looking at the docs for ffmpeg's HTTP support I discovered that it 
supports FTP! (Gopher as well, believe it or not).

The docs note that seeking doesn't work with all FTP server 
software, so you need to specifically enable it with 
"-ftp-write-seekable 1". Which servers doesn't seeking work with? 
Tried PureFTPd - ffmpeg spits out an error saying not to use 
PureFTPd. Well _now_ you tell me! I had a long-forgotten 
lightweight FTP server called BetaFTPd installed, tried it, with 
ftp:// input and ftp:// output, and success! Well, mostly: mplayer 
likes the output fine, but VLC insists that it's corrupt and has to 
'repair' it before playback, or else seeking is disabled. But it 
works on my 'media player', so I'm calling that close enough.
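
Something along these lines, anyway (host, login, paths and codec 
options are invented for illustration; "-ftp-write-seekable 1" is 
the option from the docs):

                             ----
ffmpeg -i "ftp://user:pass@laptop/videos/input.mp4" \
       -c:v mpeg4 -c:a mp2 -ftp-write-seekable 1 \
       "ftp://user:pass@laptop/videos/output.avi"
                             ----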

Finally I wrote a script which starts the FTP server, converts all 
files in the current directory by executing ffmpeg remotely with 
rexec and ftp:// input/output URLs, then shuts the FTP server down 
on exit. I also had to put a bind mount line in /etc/fstab so that 
the USB-connected drive can be accessed from a directory 
visible to the FTP server. With that, my centralisation of software 
that 'expires' onto my Internet Client SBC is nearing completion.
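
That script would look something like the sketch below (the server 
invocation, mount points, login details and file extension are all 
invented; BetaFTPd may well be started differently elsewhere):

                             ----
#!/bin/sh
# Sketch of the conversion wrapper.  The /etc/fstab bind mount was
# along the lines of (paths invented):
#   /media/usb-hdd  /srv/ftp/videos  none  bind  0  0
betaftpd &                   # start the FTP server
FTPPID=$!
trap 'kill $FTPPID' EXIT     # shut it down again on exit
sleep 1                      # give the server a moment to come up
for f in *.mp4
do
    rexec atomicpi "ffmpeg -i 'ftp://user:pass@laptop/videos/$f' \
      -c:v mpeg4 -c:a mp2 -ftp-write-seekable 1 \
      'ftp://user:pass@laptop/videos/${f%.*}.avi'"
done
                             ----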

 - The Free Thinker